00:00:00.000 Started by upstream project "autotest-per-patch" build number 120580
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.047 The recommended git tool is: git
00:00:00.047 using credential 00000000-0000-0000-0000-000000000002
00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.067 Fetching changes from the remote Git repository
00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.093 Using shallow fetch with depth 1
00:00:00.094 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.094 > git --version # timeout=10
00:00:00.125 > git --version # 'git version 2.39.2'
00:00:00.125 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.126 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.126 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.486 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.497 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.508 Checking out Revision a704ed4d86859cb8cbec080c78b138476da6ee34 (FETCH_HEAD)
00:00:02.508 > git config core.sparsecheckout # timeout=10
00:00:02.518 > git read-tree -mu HEAD # timeout=10
00:00:02.534 > git checkout -f a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=5
00:00:02.551 Commit message: "packer: Insert post-processors only if at least one is defined"
00:00:02.551 > git rev-list --no-walk a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=10
00:00:02.649 [Pipeline] Start of Pipeline
00:00:02.663 [Pipeline] library
00:00:02.664 Loading library shm_lib@master
00:00:02.664 Library shm_lib@master is cached. Copying from home.
00:00:02.680 [Pipeline] node
00:00:02.691 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:02.693 [Pipeline] {
00:00:02.705 [Pipeline] catchError
00:00:02.707 [Pipeline] {
00:00:02.717 [Pipeline] wrap
00:00:02.724 [Pipeline] {
00:00:02.729 [Pipeline] stage
00:00:02.731 [Pipeline] { (Prologue)
00:00:02.883 [Pipeline] sh
00:00:03.169 + logger -p user.info -t JENKINS-CI
00:00:03.189 [Pipeline] echo
00:00:03.190 Node: GP11
00:00:03.197 [Pipeline] sh
00:00:03.494 [Pipeline] setCustomBuildProperty
00:00:03.508 [Pipeline] echo
00:00:03.510 Cleanup processes
00:00:03.517 [Pipeline] sh
00:00:03.802 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.802 1480464 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.816 [Pipeline] sh
00:00:04.103 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.103 ++ grep -v 'sudo pgrep'
00:00:04.103 ++ awk '{print $1}'
00:00:04.104 + sudo kill -9
00:00:04.104 + true
00:00:04.116 [Pipeline] cleanWs
00:00:04.123 [WS-CLEANUP] Deleting project workspace...
00:00:04.123 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.130 [WS-CLEANUP] done
00:00:04.133 [Pipeline] setCustomBuildProperty
00:00:04.143 [Pipeline] sh
00:00:04.421 + sudo git config --global --replace-all safe.directory '*'
00:00:04.496 [Pipeline] nodesByLabel
00:00:04.497 Found a total of 1 nodes with the 'sorcerer' label
00:00:04.506 [Pipeline] httpRequest
00:00:04.510 HttpMethod: GET
00:00:04.511 URL: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:04.516 Sending request to url: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:04.522 Response Code: HTTP/1.1 200 OK
00:00:04.522 Success: Status code 200 is in the accepted range: 200,404
00:00:04.522 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:04.759 [Pipeline] sh
00:00:05.043 + tar --no-same-owner -xf jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:05.059 [Pipeline] httpRequest
00:00:05.063 HttpMethod: GET
00:00:05.063 URL: http://10.211.164.101/packages/spdk_ce34c7fd8070d809da49114e6354281a60a27df5.tar.gz
00:00:05.064 Sending request to url: http://10.211.164.101/packages/spdk_ce34c7fd8070d809da49114e6354281a60a27df5.tar.gz
00:00:05.067 Response Code: HTTP/1.1 200 OK
00:00:05.068 Success: Status code 200 is in the accepted range: 200,404
00:00:05.068 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ce34c7fd8070d809da49114e6354281a60a27df5.tar.gz
00:00:23.842 [Pipeline] sh
00:00:24.150 + tar --no-same-owner -xf spdk_ce34c7fd8070d809da49114e6354281a60a27df5.tar.gz
00:00:26.694 [Pipeline] sh
00:00:26.979 + git -C spdk log --oneline -n5
00:00:26.979 ce34c7fd8 raid: move blocklen_shift to r5f_info
00:00:26.979 d61131ae0 raid5f: interleaved md support
00:00:26.979 02f0918d1 ut/raid: allow testing interleaved md
00:00:26.979 e69c6fb44 ut/raid: refactor setting test params
00:00:26.979 5e7f316cf raid: superblock interleaved md support
00:00:26.993 [Pipeline] }
00:00:27.012 [Pipeline] // stage
00:00:27.022 [Pipeline] stage
00:00:27.024 [Pipeline] { (Prepare)
00:00:27.044 [Pipeline] writeFile
00:00:27.063 [Pipeline] sh
00:00:27.354 + logger -p user.info -t JENKINS-CI
00:00:27.366 [Pipeline] sh
00:00:27.650 + logger -p user.info -t JENKINS-CI
00:00:27.662 [Pipeline] sh
00:00:27.946 + cat autorun-spdk.conf
00:00:27.946 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:27.946 SPDK_TEST_NVMF=1
00:00:27.946 SPDK_TEST_NVME_CLI=1
00:00:27.946 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:27.946 SPDK_TEST_NVMF_NICS=e810
00:00:27.946 SPDK_TEST_VFIOUSER=1
00:00:27.946 SPDK_RUN_UBSAN=1
00:00:27.946 NET_TYPE=phy
00:00:27.954 RUN_NIGHTLY=0
00:00:27.958 [Pipeline] readFile
00:00:27.984 [Pipeline] withEnv
00:00:27.987 [Pipeline] {
00:00:28.002 [Pipeline] sh
00:00:28.287 + set -ex
00:00:28.287 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:28.287 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:28.287 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.287 ++ SPDK_TEST_NVMF=1
00:00:28.287 ++ SPDK_TEST_NVME_CLI=1
00:00:28.287 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.287 ++ SPDK_TEST_NVMF_NICS=e810
00:00:28.287 ++ SPDK_TEST_VFIOUSER=1
00:00:28.287 ++ SPDK_RUN_UBSAN=1
00:00:28.287 ++ NET_TYPE=phy
00:00:28.287 ++ RUN_NIGHTLY=0
00:00:28.287 + case $SPDK_TEST_NVMF_NICS in
00:00:28.287 + DRIVERS=ice
00:00:28.287 + [[ tcp == \r\d\m\a ]]
00:00:28.287 + [[ -n ice ]]
00:00:28.287 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:28.287 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:28.287 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:28.287 rmmod: ERROR: Module irdma is not currently loaded
00:00:28.287 rmmod: ERROR: Module i40iw is not currently loaded
00:00:28.287 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:28.287 + true
00:00:28.287 + for D in $DRIVERS
00:00:28.287 + sudo modprobe ice
00:00:28.287 + exit 0
00:00:28.297 [Pipeline] }
00:00:28.315 [Pipeline] // withEnv
00:00:28.321 [Pipeline] }
00:00:28.339 [Pipeline] // stage
00:00:28.348 [Pipeline] catchError
00:00:28.350 [Pipeline] {
00:00:28.366 [Pipeline] timeout
00:00:28.366 Timeout set to expire in 40 min
00:00:28.368 [Pipeline] {
00:00:28.384 [Pipeline] stage
00:00:28.386 [Pipeline] { (Tests)
00:00:28.401 [Pipeline] sh
00:00:28.681 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:28.682 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:28.682 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:28.682 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:28.682 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:28.682 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:28.682 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:28.682 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:28.682 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:28.682 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:28.682 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:28.682 + source /etc/os-release
00:00:28.682 ++ NAME='Fedora Linux'
00:00:28.682 ++ VERSION='38 (Cloud Edition)'
00:00:28.682 ++ ID=fedora
00:00:28.682 ++ VERSION_ID=38
00:00:28.682 ++ VERSION_CODENAME=
00:00:28.682 ++ PLATFORM_ID=platform:f38
00:00:28.682 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:28.682 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:28.682 ++ LOGO=fedora-logo-icon
00:00:28.682 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:28.682 ++ HOME_URL=https://fedoraproject.org/
00:00:28.682 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:28.682 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:28.682 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:28.682 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:28.682 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:28.682 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:28.682 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:28.682 ++ SUPPORT_END=2024-05-14
00:00:28.682 ++ VARIANT='Cloud Edition'
00:00:28.682 ++ VARIANT_ID=cloud
00:00:28.682 + uname -a
00:00:28.682 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:28.682 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:29.620 Hugepages
00:00:29.620 node hugesize free / total
00:00:29.620 node0 1048576kB 0 / 0
00:00:29.620 node0 2048kB 0 / 0
00:00:29.620 node1 1048576kB 0 / 0
00:00:29.620 node1 2048kB 0 / 0
00:00:29.620 
00:00:29.620 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:29.620 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:29.620 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:29.620 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:29.620 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:29.620 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:29.620 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:29.620 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:29.620 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:29.620 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:29.620 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:29.620 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:29.620 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:29.620 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:29.620 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:29.620 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:29.620 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:29.620 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:29.620 + rm -f /tmp/spdk-ld-path
00:00:29.620 + source autorun-spdk.conf
00:00:29.620 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.620 ++ SPDK_TEST_NVMF=1
00:00:29.620 ++ SPDK_TEST_NVME_CLI=1
00:00:29.620 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:29.620 ++ SPDK_TEST_NVMF_NICS=e810
00:00:29.620 ++ SPDK_TEST_VFIOUSER=1
00:00:29.620 ++ SPDK_RUN_UBSAN=1
00:00:29.620 ++ NET_TYPE=phy
00:00:29.620 ++ RUN_NIGHTLY=0
00:00:29.620 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:29.620 + [[ -n '' ]]
00:00:29.620 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:29.881 + for M in /var/spdk/build-*-manifest.txt
00:00:29.881 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:29.881 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:29.881 + for M in /var/spdk/build-*-manifest.txt
00:00:29.881 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:29.881 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:29.881 ++ uname
00:00:29.881 + [[ Linux == \L\i\n\u\x ]]
00:00:29.881 + sudo dmesg -T
00:00:29.881 + sudo dmesg --clear
00:00:29.881 + dmesg_pid=1481126
00:00:29.881 + [[ Fedora Linux == FreeBSD ]]
00:00:29.881 + sudo dmesg -Tw
00:00:29.881 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:29.881 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:29.881 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:29.881 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:29.881 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:29.881 + [[ -x /usr/src/fio-static/fio ]]
00:00:29.881 + export FIO_BIN=/usr/src/fio-static/fio
00:00:29.881 + FIO_BIN=/usr/src/fio-static/fio
00:00:29.881 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:29.881 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:29.881 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:29.881 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:29.881 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:29.881 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:29.881 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:29.881 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:29.881 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:29.881 Test configuration:
00:00:29.881 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.881 SPDK_TEST_NVMF=1
00:00:29.881 SPDK_TEST_NVME_CLI=1
00:00:29.881 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:29.881 SPDK_TEST_NVMF_NICS=e810
00:00:29.881 SPDK_TEST_VFIOUSER=1
00:00:29.881 SPDK_RUN_UBSAN=1
00:00:29.881 NET_TYPE=phy
00:00:29.882 RUN_NIGHTLY=0
16:47:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:29.882 16:47:45 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:29.882 16:47:45 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:29.882 16:47:45 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:29.882 16:47:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:29.882 16:47:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:29.882 16:47:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:29.882 16:47:45 -- paths/export.sh@5 -- $ export PATH
00:00:29.882 16:47:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:29.882 16:47:45 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:29.882 16:47:45 -- common/autobuild_common.sh@435 -- $ date +%s
00:00:29.882 16:47:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713451665.XXXXXX
00:00:29.882 16:47:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713451665.4iUR7e
00:00:29.882 16:47:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:00:29.882 16:47:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:00:29.882 16:47:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:29.882 16:47:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:29.882 16:47:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:29.882 16:47:45 -- common/autobuild_common.sh@451 -- $ get_config_params
00:00:29.882 16:47:45 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:00:29.882 16:47:45 -- common/autotest_common.sh@10 -- $ set +x
00:00:29.882 16:47:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:29.882 16:47:45 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:00:29.882 16:47:45 -- pm/common@17 -- $ local monitor
00:00:29.882 16:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:29.882 16:47:45 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1481160
00:00:29.882 16:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:29.882 16:47:45 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1481162
00:00:29.882 16:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:29.882 16:47:45 -- pm/common@21 -- $ date +%s
00:00:29.882 16:47:45 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1481164
00:00:29.882 16:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:29.882 16:47:45 -- pm/common@21 -- $ date +%s
00:00:29.882 16:47:45 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1481167
00:00:29.882 16:47:45 -- pm/common@26 -- $ sleep 1
00:00:29.882 16:47:45 -- pm/common@21 -- $ date +%s
00:00:29.882 16:47:45 -- pm/common@21 -- $ date +%s
00:00:29.882 16:47:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713451665
00:00:29.882 16:47:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713451665
00:00:29.882 16:47:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713451665
00:00:29.882 16:47:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713451665
00:00:29.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713451665_collect-vmstat.pm.log
00:00:29.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713451665_collect-bmc-pm.bmc.pm.log
00:00:29.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713451665_collect-cpu-load.pm.log
00:00:29.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713451665_collect-cpu-temp.pm.log
00:00:30.821 16:47:46 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:30.821 16:47:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:30.821 16:47:46 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:30.821 16:47:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:30.821 16:47:46 -- spdk/autobuild.sh@16 -- $ date -u
00:00:30.821 Thu Apr 18 02:47:46 PM UTC 2024
00:00:30.821 16:47:46 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:30.821 v24.05-pre-417-gce34c7fd8
00:00:30.821 16:47:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:30.821 16:47:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:30.821 16:47:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:30.821 16:47:46 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:30.821 16:47:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:30.821 16:47:46 -- common/autotest_common.sh@10 -- $ set +x
00:00:31.080 ************************************
00:00:31.080 START TEST ubsan
00:00:31.080 ************************************
00:00:31.080 16:47:46 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:31.080 using ubsan
00:00:31.080 
00:00:31.080 real 0m0.000s
00:00:31.080 user 0m0.000s
00:00:31.080 sys 0m0.000s
00:00:31.080 16:47:46 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:31.080 16:47:46 -- common/autotest_common.sh@10 -- $ set +x
00:00:31.080 ************************************
00:00:31.080 END TEST ubsan
00:00:31.080 ************************************
00:00:31.080 16:47:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:31.080 16:47:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:31.080 16:47:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:31.080 16:47:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:31.080 16:47:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:31.080 16:47:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:31.080 16:47:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:31.080 16:47:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:31.080 16:47:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:31.080 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:31.080 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:31.340 Using 'verbs' RDMA provider
00:00:41.893 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:51.901 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:51.901 Creating mk/config.mk...done.
00:00:51.901 Creating mk/cc.flags.mk...done.
00:00:51.901 Type 'make' to build.
00:00:51.901 16:48:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:00:51.901 16:48:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:51.901 16:48:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:51.901 16:48:06 -- common/autotest_common.sh@10 -- $ set +x
00:00:51.901 ************************************
00:00:51.901 START TEST make
00:00:51.901 ************************************
00:00:51.901 16:48:06 -- common/autotest_common.sh@1111 -- $ make -j48
00:00:51.901 make[1]: Nothing to be done for 'all'.
00:00:52.850 The Meson build system
00:00:52.850 Version: 1.3.1
00:00:52.850 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:52.850 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:52.850 Build type: native build
00:00:52.850 Project name: libvfio-user
00:00:52.850 Project version: 0.0.1
00:00:52.850 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:52.850 C linker for the host machine: cc ld.bfd 2.39-16
00:00:52.850 Host machine cpu family: x86_64
00:00:52.850 Host machine cpu: x86_64
00:00:52.850 Run-time dependency threads found: YES
00:00:52.850 Library dl found: YES
00:00:52.850 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:52.850 Run-time dependency json-c found: YES 0.17
00:00:52.850 Run-time dependency cmocka found: YES 1.1.7
00:00:52.850 Program pytest-3 found: NO
00:00:52.850 Program flake8 found: NO
00:00:52.850 Program misspell-fixer found: NO
00:00:52.850 Program restructuredtext-lint found: NO
00:00:52.850 Program valgrind found: YES (/usr/bin/valgrind)
00:00:52.850 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:52.850 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:52.850 Compiler for C supports arguments -Wwrite-strings: YES
00:00:52.850 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:52.850 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:52.850 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:52.850 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:52.850 Build targets in project: 8
00:00:52.850 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:52.850 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:52.850 
00:00:52.850 libvfio-user 0.0.1
00:00:52.850 
00:00:52.850 User defined options
00:00:52.850 buildtype : debug
00:00:52.850 default_library: shared
00:00:52.850 libdir : /usr/local/lib
00:00:52.850 
00:00:52.850 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:53.796 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:53.796 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:53.796 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:53.796 [3/37] Compiling C object samples/null.p/null.c.o
00:00:53.796 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:53.796 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:53.796 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:53.796 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:53.796 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:53.796 [9/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:00:53.796 [10/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:53.796 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:53.796 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:54.059 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:54.059 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:54.059 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:54.059 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:54.059 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:54.059 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:00:54.059 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:54.059 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:54.059 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:54.059 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:54.059 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:00:54.059 [24/37] Compiling C object samples/server.p/server.c.o
00:00:54.059 [25/37] Compiling C object samples/client.p/client.c.o
00:00:54.059 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:00:54.059 [27/37] Linking target samples/client
00:00:54.059 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:00:54.322 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:00:54.322 [30/37] Linking target test/unit_tests
00:00:54.322 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:00:54.581 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:00:54.581 [33/37] Linking target samples/null
00:00:54.581 [34/37] Linking target samples/server
00:00:54.581 [35/37] Linking target samples/lspci
00:00:54.581 [36/37] Linking target samples/gpio-pci-idio-16
00:00:54.581 [37/37] Linking target samples/shadow_ioeventfd_server
00:00:54.581 INFO: autodetecting backend as ninja
00:00:54.581 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:54.844 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:55.414 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:55.414 ninja: no work to do.
00:00:59.602 The Meson build system
00:00:59.602 Version: 1.3.1
00:00:59.602 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:00:59.602 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:00:59.602 Build type: native build
00:00:59.602 Program cat found: YES (/usr/bin/cat)
00:00:59.602 Project name: DPDK
00:00:59.602 Project version: 23.11.0
00:00:59.602 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:59.602 C linker for the host machine: cc ld.bfd 2.39-16
00:00:59.602 Host machine cpu family: x86_64
00:00:59.602 Host machine cpu: x86_64
00:00:59.602 Message: ## Building in Developer Mode ##
00:00:59.602 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:59.602 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:00:59.602 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:00:59.602 Program python3 found: YES (/usr/bin/python3)
00:00:59.602 Program cat found: YES (/usr/bin/cat)
00:00:59.602 Compiler for C supports arguments -march=native: YES
00:00:59.602 Checking for size of "void *" : 8
00:00:59.602 Checking for size of "void *" : 8 (cached)
00:00:59.602 Library m found: YES
00:00:59.602 Library numa found: YES
00:00:59.602 Has header "numaif.h" : YES
00:00:59.602 Library fdt found: NO
00:00:59.602 Library execinfo found: NO
00:00:59.602 Has header "execinfo.h" : YES
00:00:59.602 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:59.602 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:59.602 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:59.602 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:59.602 Run-time dependency openssl found: YES 3.0.9
00:00:59.602 Run-time dependency libpcap found: YES 1.10.4
00:00:59.602 Has header "pcap.h" with dependency libpcap: YES
00:00:59.602 Compiler for C supports arguments -Wcast-qual: YES
00:00:59.602 Compiler for C supports arguments -Wdeprecated: YES
00:00:59.603 Compiler for C supports arguments -Wformat: YES
00:00:59.603 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:59.603 Compiler for C supports arguments -Wformat-security: NO
00:00:59.603 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:59.603 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:59.603 Compiler for C supports arguments -Wnested-externs: YES
00:00:59.603 Compiler for C supports arguments -Wold-style-definition: YES
00:00:59.603 Compiler for C supports arguments -Wpointer-arith: YES
00:00:59.603 Compiler for C supports arguments -Wsign-compare: YES
00:00:59.603 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:59.603 Compiler for C supports arguments -Wundef: YES
00:00:59.603 Compiler for C supports arguments -Wwrite-strings: YES
00:00:59.603 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:59.603 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:59.603 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:59.603 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:59.603 Program objdump found: YES (/usr/bin/objdump)
00:00:59.603 Compiler for C supports arguments -mavx512f: YES
00:00:59.603 Checking if "AVX512 checking" compiles: YES
00:00:59.603 Fetching value of define "__SSE4_2__" : 1
00:00:59.603 Fetching value of define "__AES__" : 1
00:00:59.603 Fetching value of define "__AVX__" : 1
00:00:59.603 Fetching value of define "__AVX2__" : (undefined)
00:00:59.603 Fetching value of define "__AVX512BW__" : (undefined)
00:00:59.603 Fetching value of define "__AVX512CD__" : (undefined)
00:00:59.603 Fetching value of define "__AVX512DQ__" : (undefined)
00:00:59.603 Fetching value of define "__AVX512F__" : (undefined)
00:00:59.603 Fetching value of define "__AVX512VL__" : (undefined)
00:00:59.603 Fetching value of define "__PCLMUL__" : 1
00:00:59.603 Fetching value of define "__RDRND__" : 1
00:00:59.603 Fetching value of define "__RDSEED__" : (undefined)
00:00:59.603 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:59.603 Fetching value of define "__znver1__" : (undefined)
00:00:59.603 Fetching value of define "__znver2__" : (undefined)
00:00:59.603 Fetching value of define "__znver3__" : (undefined)
00:00:59.603 Fetching value of define "__znver4__" : (undefined)
00:00:59.603 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:59.603 Message: lib/log: Defining dependency "log"
00:00:59.603 Message: lib/kvargs: Defining dependency "kvargs"
00:00:59.603 Message: lib/telemetry: Defining dependency "telemetry"
00:00:59.603 Checking for function "getentropy" : NO
00:00:59.603 Message: lib/eal: Defining dependency "eal"
00:00:59.603 Message: lib/ring: Defining dependency "ring"
00:00:59.603 Message: lib/rcu: Defining dependency "rcu"
00:00:59.603 Message: lib/mempool: Defining dependency "mempool"
00:00:59.603 Message: lib/mbuf: Defining dependency "mbuf"
00:00:59.603 Fetching value of define "__PCLMUL__" : 1 (cached)
00:00:59.603 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:59.603 Compiler for C supports arguments -mpclmul: YES
00:00:59.603 Compiler for C supports arguments -maes: YES
00:00:59.603 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:59.603 Compiler for C supports arguments -mavx512bw: YES
00:00:59.603 Compiler for C supports arguments -mavx512dq: YES
00:00:59.603 Compiler for C supports arguments -mavx512vl: YES
00:00:59.603 Compiler for C supports arguments -mvpclmulqdq: YES
00:00:59.603 Compiler for C supports arguments -mavx2: YES
00:00:59.603 Compiler for C supports arguments -mavx: YES
00:00:59.603 Message: lib/net: Defining dependency "net"
00:00:59.603 Message: lib/meter: Defining dependency "meter"
00:00:59.603 Message: lib/ethdev: Defining dependency "ethdev"
00:00:59.603 Message: lib/pci: Defining dependency "pci"
00:00:59.603 Message: lib/cmdline: Defining dependency "cmdline"
00:00:59.603 Message: lib/hash: Defining dependency "hash"
00:00:59.603 Message: lib/timer: Defining dependency "timer"
00:00:59.603 Message: lib/compressdev: Defining dependency "compressdev"
00:00:59.603 Message: lib/cryptodev: Defining dependency "cryptodev"
00:00:59.603 Message: lib/dmadev: Defining dependency "dmadev"
00:00:59.603 Compiler for C supports arguments -Wno-cast-qual: YES
00:00:59.603 Message: lib/power: Defining dependency "power"
00:00:59.603 Message: lib/reorder: Defining dependency "reorder"
00:00:59.603 Message: lib/security: Defining dependency "security"
00:00:59.603 Has header "linux/userfaultfd.h" : YES
00:00:59.603 Has header "linux/vduse.h" : YES
00:00:59.603 Message: lib/vhost: Defining dependency "vhost"
00:00:59.603 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:00:59.603 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:00:59.603 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:00:59.603 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:00:59.603 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:00:59.603 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:00:59.603 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:00:59.603 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:00:59.603 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:00:59.603 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:00:59.603 Program doxygen found: YES (/usr/bin/doxygen)
00:00:59.603 Configuring doxy-api-html.conf using configuration
00:00:59.603 Configuring doxy-api-man.conf using configuration
00:00:59.603 Program mandb found: YES (/usr/bin/mandb)
00:00:59.603 Program sphinx-build found: NO
00:00:59.603 Configuring rte_build_config.h using configuration
00:00:59.603 Message:
00:00:59.603 =================
00:00:59.603 Applications Enabled
00:00:59.603 =================
00:00:59.603
00:00:59.603 apps:
00:00:59.603
00:00:59.603
00:00:59.603 Message:
00:00:59.603 =================
00:00:59.603 Libraries Enabled
00:00:59.603 =================
00:00:59.603
00:00:59.603 libs:
00:00:59.603 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:00:59.603 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:00:59.603 cryptodev, dmadev, power, reorder, security, vhost,
00:00:59.603
00:00:59.603 Message:
00:00:59.603 ===============
00:00:59.603 Drivers Enabled
00:00:59.603 ===============
00:00:59.603
00:00:59.603 common:
00:00:59.603
00:00:59.603 bus:
00:00:59.603 pci, vdev,
00:00:59.603 mempool:
00:00:59.603 ring,
00:00:59.603 dma:
00:00:59.603
00:00:59.603 net:
00:00:59.603
00:00:59.603 crypto:
00:00:59.603
00:00:59.603 compress:
00:00:59.603
00:00:59.603 vdpa:
00:00:59.603
00:00:59.603
00:00:59.603 Message:
00:00:59.603 =================
00:00:59.603 Content Skipped
00:00:59.603 =================
00:00:59.603
00:00:59.603 apps:
00:00:59.603 dumpcap: explicitly disabled via build config
00:00:59.603 graph: explicitly disabled via build config
00:00:59.603 pdump: explicitly disabled via build config
00:00:59.603 proc-info: explicitly disabled via build config
00:00:59.603 test-acl: explicitly disabled via build config
00:00:59.603 test-bbdev: explicitly disabled via build config
00:00:59.603 test-cmdline: explicitly disabled via build config
00:00:59.603 test-compress-perf: explicitly disabled via build config
00:00:59.603 test-crypto-perf: explicitly disabled via build config
00:00:59.603 test-dma-perf: explicitly disabled via build config
00:00:59.603 test-eventdev: explicitly disabled via build config
00:00:59.603 test-fib: explicitly disabled via build config
00:00:59.603 test-flow-perf: explicitly disabled via build config
00:00:59.603 test-gpudev: explicitly disabled via build config
00:00:59.603 test-mldev: explicitly disabled via build config
00:00:59.603 test-pipeline: explicitly disabled via build config
00:00:59.603 test-pmd: explicitly disabled via build config
00:00:59.603 test-regex: explicitly disabled via build config
00:00:59.603 test-sad: explicitly disabled via build config
00:00:59.603 test-security-perf: explicitly disabled via build config
00:00:59.603
00:00:59.603 libs:
00:00:59.603 metrics: explicitly disabled via build config
00:00:59.603 acl: explicitly disabled via build config
00:00:59.603 bbdev: explicitly disabled via build config
00:00:59.603 bitratestats: explicitly disabled via build config
00:00:59.603 bpf: explicitly disabled via build config
00:00:59.603 cfgfile: explicitly disabled via build config
00:00:59.603 distributor: explicitly disabled via build config
00:00:59.603 efd: explicitly disabled via build config
00:00:59.603 eventdev: explicitly disabled via build config
00:00:59.603 dispatcher: explicitly disabled via build config
00:00:59.603 gpudev: explicitly disabled via build config
00:00:59.603 gro: explicitly disabled via build config
00:00:59.603 gso: explicitly disabled via build config
00:00:59.603 ip_frag: explicitly disabled via build config
00:00:59.603 jobstats: explicitly disabled via build config
00:00:59.603 latencystats: explicitly disabled via build config
00:00:59.603 lpm: explicitly disabled via build config
00:00:59.603 member: explicitly disabled via build config
00:00:59.603 pcapng: explicitly disabled via build config
00:00:59.603 rawdev: explicitly disabled via build config
00:00:59.603 regexdev: explicitly disabled via build config
00:00:59.603 mldev: explicitly disabled via build config
00:00:59.603 rib: explicitly disabled via build config
00:00:59.603 sched: explicitly disabled via build config
00:00:59.603 stack: explicitly disabled via build config
00:00:59.603 ipsec: explicitly disabled via build config
00:00:59.603 pdcp: explicitly disabled via build config
00:00:59.603 fib: explicitly disabled via build config
00:00:59.603 port: explicitly disabled via build config
00:00:59.603 pdump: explicitly disabled via build config
00:00:59.603 table: explicitly disabled via build config
00:00:59.603 pipeline: explicitly disabled via build config
00:00:59.603 graph: explicitly disabled via build config
00:00:59.603 node: explicitly disabled via build config
00:00:59.603
00:00:59.603 drivers:
00:00:59.603 common/cpt: not in enabled drivers build config
00:00:59.603 common/dpaax: not in enabled drivers build config
00:00:59.603 common/iavf: not in enabled drivers build config
00:00:59.603 common/idpf: not in enabled drivers build config
00:00:59.603 common/mvep: not in enabled drivers build config
00:00:59.603 common/octeontx: not in enabled drivers build config
00:00:59.603 bus/auxiliary: not in enabled drivers build config
00:00:59.603 bus/cdx: not in enabled drivers build config
00:00:59.603 bus/dpaa: not in enabled drivers build config
00:00:59.603 bus/fslmc: not in enabled drivers build config
00:00:59.604 bus/ifpga: not in enabled drivers build config
00:00:59.604 bus/platform: not in enabled drivers build config
00:00:59.604 bus/vmbus: not in enabled drivers build config
00:00:59.604 common/cnxk: not in enabled drivers build config
00:00:59.604 common/mlx5: not in enabled drivers build config
00:00:59.604 common/nfp: not in enabled drivers build config
00:00:59.604 common/qat: not in enabled drivers build config
00:00:59.604 common/sfc_efx: not in enabled drivers build config
00:00:59.604 mempool/bucket: not in enabled drivers build config
00:00:59.604 mempool/cnxk: not in enabled drivers build config
00:00:59.604 mempool/dpaa: not in enabled drivers build config
00:00:59.604 mempool/dpaa2: not in enabled drivers build config
00:00:59.604 mempool/octeontx: not in enabled drivers build config
00:00:59.604 mempool/stack: not in enabled drivers build config
00:00:59.604 dma/cnxk: not in enabled drivers build config
00:00:59.604 dma/dpaa: not in enabled drivers build config
00:00:59.604 dma/dpaa2: not in enabled drivers build config
00:00:59.604 dma/hisilicon: not in enabled drivers build config
00:00:59.604 dma/idxd: not in enabled drivers build config
00:00:59.604 dma/ioat: not in enabled drivers build config
00:00:59.604 dma/skeleton: not in enabled drivers build config
00:00:59.604 net/af_packet: not in enabled drivers build config
00:00:59.604 net/af_xdp: not in enabled drivers build config
00:00:59.604 net/ark: not in enabled drivers build config
00:00:59.604 net/atlantic: not in enabled drivers build config
00:00:59.604 net/avp: not in enabled drivers build config
00:00:59.604 net/axgbe: not in enabled drivers build config
00:00:59.604 net/bnx2x: not in enabled drivers build config
00:00:59.604 net/bnxt: not in enabled drivers build config
00:00:59.604 net/bonding: not in enabled drivers build config
00:00:59.604 net/cnxk: not in enabled drivers build config
00:00:59.604 net/cpfl: not in enabled drivers build config
00:00:59.604 net/cxgbe: not in enabled drivers build config
00:00:59.604 net/dpaa: not in enabled drivers build config
00:00:59.604 net/dpaa2: not in enabled drivers build config
00:00:59.604 net/e1000: not in enabled drivers build config
00:00:59.604 net/ena: not in enabled drivers build config
00:00:59.604 net/enetc: not in enabled drivers build config
00:00:59.604 net/enetfec: not in enabled drivers build config
00:00:59.604 net/enic: not in enabled drivers build config
00:00:59.604 net/failsafe: not in enabled drivers build config
00:00:59.604 net/fm10k: not in enabled drivers build config
00:00:59.604 net/gve: not in enabled drivers build config
00:00:59.604 net/hinic: not in enabled drivers build config
00:00:59.604 net/hns3: not in enabled drivers build config
00:00:59.604 net/i40e: not in enabled drivers build config
00:00:59.604 net/iavf: not in enabled drivers build config
00:00:59.604 net/ice: not in enabled drivers build config
00:00:59.604 net/idpf: not in enabled drivers build config
00:00:59.604 net/igc: not in enabled drivers build config
00:00:59.604 net/ionic: not in enabled drivers build config
00:00:59.604 net/ipn3ke: not in enabled drivers build config
00:00:59.604 net/ixgbe: not in enabled drivers build config
00:00:59.604 net/mana: not in enabled drivers build config
00:00:59.604 net/memif: not in enabled drivers build config
00:00:59.604 net/mlx4: not in enabled drivers build config
00:00:59.604 net/mlx5: not in enabled drivers build config
00:00:59.604 net/mvneta: not in enabled drivers build config
00:00:59.604 net/mvpp2: not in enabled drivers build config
00:00:59.604 net/netvsc: not in enabled drivers build config
00:00:59.604 net/nfb: not in enabled drivers build config
00:00:59.604 net/nfp: not in enabled drivers build config
00:00:59.604 net/ngbe: not in enabled drivers build config
00:00:59.604 net/null: not in enabled drivers build config
00:00:59.604 net/octeontx: not in enabled drivers build config
00:00:59.604 net/octeon_ep: not in enabled drivers build config
00:00:59.604 net/pcap: not in enabled drivers build config
00:00:59.604 net/pfe: not in enabled drivers build config
00:00:59.604 net/qede: not in enabled drivers build config
00:00:59.604 net/ring: not in enabled drivers build config
00:00:59.604 net/sfc: not in enabled drivers build config
00:00:59.604 net/softnic: not in enabled drivers build config
00:00:59.604 net/tap: not in enabled drivers build config
00:00:59.604 net/thunderx: not in enabled drivers build config
00:00:59.604 net/txgbe: not in enabled drivers build config
00:00:59.604 net/vdev_netvsc: not in enabled drivers build config
00:00:59.604 net/vhost: not in enabled drivers build config
00:00:59.604 net/virtio: not in enabled drivers build config
00:00:59.604 net/vmxnet3: not in enabled drivers build config
00:00:59.604 raw/*: missing internal dependency, "rawdev"
00:00:59.604 crypto/armv8: not in enabled drivers build config
00:00:59.604 crypto/bcmfs: not in enabled drivers build config
00:00:59.604 crypto/caam_jr: not in enabled drivers build config
00:00:59.604 crypto/ccp: not in enabled drivers build config
00:00:59.604 crypto/cnxk: not in enabled drivers build config
00:00:59.604 crypto/dpaa_sec: not in enabled drivers build config
00:00:59.604 crypto/dpaa2_sec: not in enabled drivers build config
00:00:59.604 crypto/ipsec_mb: not in enabled drivers build config
00:00:59.604 crypto/mlx5: not in enabled drivers build config
00:00:59.604 crypto/mvsam: not in enabled drivers build config
00:00:59.604 crypto/nitrox: not in enabled drivers build config
00:00:59.604 crypto/null: not in enabled drivers build config
00:00:59.604 crypto/octeontx: not in enabled drivers build config
00:00:59.604 crypto/openssl: not in enabled drivers build config
00:00:59.604 crypto/scheduler: not in enabled drivers build config
00:00:59.604 crypto/uadk: not in enabled drivers build config
00:00:59.604 crypto/virtio: not in enabled drivers build config
00:00:59.604 compress/isal: not in enabled drivers build config
00:00:59.604 compress/mlx5: not in enabled drivers build config
00:00:59.604 compress/octeontx: not in enabled drivers build config
00:00:59.604 compress/zlib: not in enabled drivers build config
00:00:59.604 regex/*: missing internal dependency, "regexdev"
00:00:59.604 ml/*: missing internal dependency, "mldev"
00:00:59.604 vdpa/ifc: not in enabled drivers build config
00:00:59.604 vdpa/mlx5: not in enabled drivers build config
00:00:59.604 vdpa/nfp: not in enabled drivers build config
00:00:59.604 vdpa/sfc: not in enabled drivers build config
00:00:59.604 event/*: missing internal dependency, "eventdev"
00:00:59.604 baseband/*: missing internal dependency, "bbdev"
00:00:59.604 gpu/*: missing internal dependency, "gpudev"
00:00:59.604
00:00:59.604
00:00:59.862 Build targets in project: 85
00:00:59.862
00:00:59.862 DPDK 23.11.0
00:00:59.862
00:00:59.862 User defined options
00:00:59.862 buildtype : debug
00:00:59.862 default_library : shared
00:00:59.862 libdir : lib
00:00:59.862 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:59.862 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:00:59.862 c_link_args :
00:00:59.862 cpu_instruction_set: native
00:00:59.862 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:00:59.862 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:00:59.862 enable_docs : false
00:00:59.862 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:00:59.862 enable_kmods : false
00:00:59.862 tests : false
00:00:59.862
00:00:59.862 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:00.437 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:00.437 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:00.437 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:00.437 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:00.437 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:00.437 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:00.437 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:00.437 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:00.437 [8/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:00.437 [9/265] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:00.437 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:00.437 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:00.437 [12/265] Linking static target lib/librte_kvargs.a
00:01:00.437 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:00.437 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:00.437 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:00.437 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:00.437 [17/265] Linking static target lib/librte_log.a
00:01:00.437 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:00.437 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:00.697 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:00.697 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:00.959 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:01.219 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:01.219 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:01.219 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:01.219 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:01.219 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:01.219 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:01.219 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:01.219 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:01.219 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:01.219 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:01.219 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:01.219 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:01.219 [35/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:01.219 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:01.219 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:01.219 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:01.219 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:01.219 [40/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:01.219 [41/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:01.219 [42/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:01.219 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:01.487 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:01.487 [45/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:01.487 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:01.487 [47/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:01.487 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:01.487 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:01.487 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:01.487 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:01.487 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:01.487 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:01.487 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:01.487 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:01.487 [56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:01.487 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:01.487 [58/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:01.487 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:01.487 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:01.487 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:01.487 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:01.487 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:01.487 [64/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:01.487 [65/265] Linking static target lib/librte_telemetry.a
00:01:01.747 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:01.747 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:01.747 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:01.747 [69/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:01.747 [70/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:01.747 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:01.747 [72/265] Linking static target lib/librte_pci.a
00:01:01.747 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:01.747 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:01.747 [75/265] Linking target lib/librte_log.so.24.0
00:01:01.747 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:01.747 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:01.747 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:01.747 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:01.747 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:01.747 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:02.009 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:02.009 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:02.009 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:02.009 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:02.009 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:02.009 [87/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:02.009 [88/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:02.009 [89/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:02.268 [90/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:02.268 [91/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:02.268 [92/265] Linking target lib/librte_kvargs.so.24.0
00:01:02.268 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:02.268 [94/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:02.268 [95/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:02.268 [96/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:02.268 [97/265] Linking static target lib/librte_eal.a
00:01:02.268 [98/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:02.268 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:02.268 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:02.268 [101/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:02.268 [102/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:02.268 [103/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:02.268 [104/265] Linking static target lib/librte_ring.a
00:01:02.268 [105/265] Linking static target lib/librte_meter.a
00:01:02.529 [106/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:02.529 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:02.529 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:02.529 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:02.529 [110/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:02.529 [111/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:02.529 [112/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:02.529 [113/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:02.529 [114/265] Linking static target lib/librte_rcu.a
00:01:02.529 [115/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:02.529 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:02.529 [117/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:02.529 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:02.529 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:02.529 [120/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:02.529 [121/265] Linking static target lib/librte_mempool.a
00:01:02.529 [122/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:02.529 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:02.529 [124/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:02.529 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:02.529 [126/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:02.794 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:02.795 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:02.795 [129/265] Linking target lib/librte_telemetry.so.24.0
00:01:02.795 [130/265] Linking static target lib/librte_cmdline.a
00:01:02.795 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:02.795 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:02.795 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:02.795 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:02.795 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:02.795 [136/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:02.795 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:02.795 [138/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:03.058 [139/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:03.058 [140/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:03.058 [141/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:03.058 [142/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:03.058 [143/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:03.058 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:03.058 [145/265] Linking static target lib/librte_net.a
00:01:03.058 [146/265] Linking static target lib/librte_timer.a
00:01:03.058 [147/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:03.058 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:03.058 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:03.058 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:03.058 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:03.318 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:03.318 [153/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:03.318 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:03.318 [155/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:03.318 [156/265] Linking static target lib/librte_dmadev.a
00:01:03.318 [157/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:03.318 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:03.318 [159/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:03.318 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:03.576 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:03.576 [162/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:03.576 [163/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:03.576 [164/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:03.576 [165/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:03.576 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:03.576 [167/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:03.576 [168/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:03.576 [169/265] Linking static target lib/librte_hash.a
00:01:03.576 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:03.576 [171/265] Linking static target lib/librte_compressdev.a
00:01:03.576 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:03.576 [173/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:03.576 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:03.576 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:03.576 [176/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:03.576 [177/265] Linking static target lib/librte_power.a
00:01:03.836 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:03.836 [179/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:03.836 [180/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:03.836 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:03.836 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:03.836 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:03.836 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:03.836 [185/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:03.836 [186/265] Linking static target lib/librte_reorder.a
00:01:03.836 [187/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:03.836 [188/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:03.836 [189/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:03.836 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:04.094 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:04.094 [192/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:04.094 [193/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:04.094 [194/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:04.094 [195/265] Linking static target lib/librte_security.a
00:01:04.094 [196/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:04.094 [197/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:04.094 [198/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:04.094 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:04.094 [200/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:04.094 [201/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:04.094 [202/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:04.094 [203/265] Linking static target drivers/librte_bus_vdev.a
00:01:04.094 [204/265] Linking static target lib/librte_mbuf.a
00:01:04.094 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:04.094 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:04.094 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:04.094 [208/265] Linking static target drivers/librte_bus_pci.a
00:01:04.094 [209/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:04.094 [210/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:04.353 [211/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:04.353 [212/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:04.353 [213/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:04.353 [214/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:04.353 [215/265] Linking static target drivers/librte_mempool_ring.a
00:01:04.353 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:04.353 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:04.610 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:04.610 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:04.610 [220/265] Linking static target lib/librte_ethdev.a
00:01:04.611 [221/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:04.611 [222/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:04.611 [223/265] Linking static target lib/librte_cryptodev.a
00:01:05.546 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:06.920 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:08.823 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.823 [227/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.823 [228/265] Linking target lib/librte_eal.so.24.0
00:01:08.823 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:01:08.823 [230/265] Linking target lib/librte_timer.so.24.0
00:01:08.823 [231/265] Linking target lib/librte_ring.so.24.0
00:01:08.823 [232/265] Linking target lib/librte_meter.so.24.0
00:01:08.823 [233/265] Linking target
lib/librte_pci.so.24.0 00:01:08.823 [234/265] Linking target lib/librte_dmadev.so.24.0 00:01:08.823 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:08.823 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:08.823 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:08.823 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:08.823 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:08.823 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:08.823 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:08.823 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:08.823 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:09.131 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:09.131 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:09.131 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:09.131 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:09.131 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:09.131 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:09.131 [250/265] Linking target lib/librte_cryptodev.so.24.0 00:01:09.131 [251/265] Linking target lib/librte_compressdev.so.24.0 00:01:09.131 [252/265] Linking target lib/librte_net.so.24.0 00:01:09.390 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:09.390 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:09.390 [255/265] Linking target lib/librte_hash.so.24.0 00:01:09.390 [256/265] Linking target lib/librte_security.so.24.0 00:01:09.390 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:09.390 [258/265] Linking target 
lib/librte_ethdev.so.24.0 00:01:09.390 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:09.649 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:09.649 [261/265] Linking target lib/librte_power.so.24.0 00:01:12.944 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:12.944 [263/265] Linking static target lib/librte_vhost.a 00:01:13.512 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.512 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:13.512 INFO: autodetecting backend as ninja 00:01:13.512 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:14.469 CC lib/ut_mock/mock.o 00:01:14.469 CC lib/log/log.o 00:01:14.469 CC lib/log/log_flags.o 00:01:14.469 CC lib/log/log_deprecated.o 00:01:14.469 CC lib/ut/ut.o 00:01:14.727 LIB libspdk_ut_mock.a 00:01:14.727 SO libspdk_ut_mock.so.6.0 00:01:14.727 LIB libspdk_log.a 00:01:14.727 LIB libspdk_ut.a 00:01:14.727 SO libspdk_ut.so.2.0 00:01:14.727 SO libspdk_log.so.7.0 00:01:14.727 SYMLINK libspdk_ut_mock.so 00:01:14.727 SYMLINK libspdk_ut.so 00:01:14.727 SYMLINK libspdk_log.so 00:01:14.986 CC lib/dma/dma.o 00:01:14.986 CC lib/ioat/ioat.o 00:01:14.986 CXX lib/trace_parser/trace.o 00:01:14.986 CC lib/util/base64.o 00:01:14.986 CC lib/util/bit_array.o 00:01:14.986 CC lib/util/cpuset.o 00:01:14.986 CC lib/util/crc16.o 00:01:14.986 CC lib/util/crc32.o 00:01:14.986 CC lib/util/crc32c.o 00:01:14.986 CC lib/util/crc32_ieee.o 00:01:14.986 CC lib/util/crc64.o 00:01:14.986 CC lib/util/dif.o 00:01:14.986 CC lib/util/fd.o 00:01:14.986 CC lib/util/file.o 00:01:14.986 CC lib/util/hexlify.o 00:01:14.986 CC lib/util/iov.o 00:01:14.986 CC lib/util/math.o 00:01:14.986 CC lib/util/pipe.o 00:01:14.986 CC lib/util/strerror_tls.o 00:01:14.986 CC lib/util/string.o 00:01:14.986 CC 
lib/util/uuid.o 00:01:14.986 CC lib/util/fd_group.o 00:01:14.986 CC lib/util/xor.o 00:01:14.986 CC lib/util/zipf.o 00:01:14.986 CC lib/vfio_user/host/vfio_user_pci.o 00:01:14.986 CC lib/vfio_user/host/vfio_user.o 00:01:15.244 LIB libspdk_dma.a 00:01:15.244 SO libspdk_dma.so.4.0 00:01:15.244 SYMLINK libspdk_dma.so 00:01:15.244 LIB libspdk_ioat.a 00:01:15.245 SO libspdk_ioat.so.7.0 00:01:15.245 LIB libspdk_vfio_user.a 00:01:15.245 SYMLINK libspdk_ioat.so 00:01:15.245 SO libspdk_vfio_user.so.5.0 00:01:15.503 SYMLINK libspdk_vfio_user.so 00:01:15.503 LIB libspdk_util.a 00:01:15.503 SO libspdk_util.so.9.0 00:01:15.762 SYMLINK libspdk_util.so 00:01:15.762 CC lib/env_dpdk/env.o 00:01:15.762 CC lib/conf/conf.o 00:01:15.762 CC lib/idxd/idxd.o 00:01:15.762 CC lib/rdma/common.o 00:01:15.762 CC lib/json/json_parse.o 00:01:15.762 CC lib/vmd/vmd.o 00:01:15.762 CC lib/env_dpdk/memory.o 00:01:15.762 CC lib/rdma/rdma_verbs.o 00:01:15.762 CC lib/idxd/idxd_user.o 00:01:15.762 CC lib/vmd/led.o 00:01:15.762 CC lib/json/json_util.o 00:01:15.762 CC lib/env_dpdk/pci.o 00:01:15.762 CC lib/json/json_write.o 00:01:15.762 CC lib/env_dpdk/init.o 00:01:15.762 CC lib/env_dpdk/threads.o 00:01:15.762 CC lib/env_dpdk/pci_ioat.o 00:01:15.762 CC lib/env_dpdk/pci_virtio.o 00:01:15.762 CC lib/env_dpdk/pci_vmd.o 00:01:15.762 CC lib/env_dpdk/pci_idxd.o 00:01:15.762 CC lib/env_dpdk/pci_event.o 00:01:15.762 CC lib/env_dpdk/sigbus_handler.o 00:01:15.762 CC lib/env_dpdk/pci_dpdk.o 00:01:15.762 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:15.762 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:15.762 LIB libspdk_trace_parser.a 00:01:16.021 SO libspdk_trace_parser.so.5.0 00:01:16.021 SYMLINK libspdk_trace_parser.so 00:01:16.021 LIB libspdk_conf.a 00:01:16.021 SO libspdk_conf.so.6.0 00:01:16.279 LIB libspdk_json.a 00:01:16.279 LIB libspdk_rdma.a 00:01:16.279 SYMLINK libspdk_conf.so 00:01:16.279 SO libspdk_rdma.so.6.0 00:01:16.279 SO libspdk_json.so.6.0 00:01:16.279 SYMLINK libspdk_rdma.so 00:01:16.279 SYMLINK libspdk_json.so 
00:01:16.538 LIB libspdk_idxd.a 00:01:16.538 CC lib/jsonrpc/jsonrpc_server.o 00:01:16.538 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:16.538 CC lib/jsonrpc/jsonrpc_client.o 00:01:16.538 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:16.538 SO libspdk_idxd.so.12.0 00:01:16.538 SYMLINK libspdk_idxd.so 00:01:16.538 LIB libspdk_vmd.a 00:01:16.538 SO libspdk_vmd.so.6.0 00:01:16.538 SYMLINK libspdk_vmd.so 00:01:16.796 LIB libspdk_jsonrpc.a 00:01:16.796 SO libspdk_jsonrpc.so.6.0 00:01:16.796 SYMLINK libspdk_jsonrpc.so 00:01:17.055 CC lib/rpc/rpc.o 00:01:17.314 LIB libspdk_rpc.a 00:01:17.314 SO libspdk_rpc.so.6.0 00:01:17.314 SYMLINK libspdk_rpc.so 00:01:17.314 CC lib/notify/notify.o 00:01:17.314 CC lib/notify/notify_rpc.o 00:01:17.314 CC lib/trace/trace.o 00:01:17.314 CC lib/keyring/keyring.o 00:01:17.314 CC lib/keyring/keyring_rpc.o 00:01:17.314 CC lib/trace/trace_flags.o 00:01:17.314 CC lib/trace/trace_rpc.o 00:01:17.573 LIB libspdk_notify.a 00:01:17.573 SO libspdk_notify.so.6.0 00:01:17.573 LIB libspdk_keyring.a 00:01:17.573 LIB libspdk_trace.a 00:01:17.573 SYMLINK libspdk_notify.so 00:01:17.573 SO libspdk_keyring.so.1.0 00:01:17.573 SO libspdk_trace.so.10.0 00:01:17.831 SYMLINK libspdk_keyring.so 00:01:17.831 SYMLINK libspdk_trace.so 00:01:17.831 LIB libspdk_env_dpdk.a 00:01:17.831 SO libspdk_env_dpdk.so.14.0 00:01:17.831 CC lib/sock/sock.o 00:01:17.831 CC lib/sock/sock_rpc.o 00:01:17.831 CC lib/thread/thread.o 00:01:17.831 CC lib/thread/iobuf.o 00:01:18.089 SYMLINK libspdk_env_dpdk.so 00:01:18.348 LIB libspdk_sock.a 00:01:18.348 SO libspdk_sock.so.9.0 00:01:18.348 SYMLINK libspdk_sock.so 00:01:18.606 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:18.606 CC lib/nvme/nvme_ctrlr.o 00:01:18.606 CC lib/nvme/nvme_fabric.o 00:01:18.606 CC lib/nvme/nvme_ns_cmd.o 00:01:18.606 CC lib/nvme/nvme_ns.o 00:01:18.606 CC lib/nvme/nvme_pcie_common.o 00:01:18.606 CC lib/nvme/nvme_pcie.o 00:01:18.606 CC lib/nvme/nvme_qpair.o 00:01:18.606 CC lib/nvme/nvme.o 00:01:18.606 CC lib/nvme/nvme_quirks.o 
00:01:18.606 CC lib/nvme/nvme_transport.o 00:01:18.606 CC lib/nvme/nvme_discovery.o 00:01:18.606 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:18.606 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:18.606 CC lib/nvme/nvme_tcp.o 00:01:18.606 CC lib/nvme/nvme_opal.o 00:01:18.606 CC lib/nvme/nvme_io_msg.o 00:01:18.606 CC lib/nvme/nvme_poll_group.o 00:01:18.606 CC lib/nvme/nvme_zns.o 00:01:18.606 CC lib/nvme/nvme_stubs.o 00:01:18.606 CC lib/nvme/nvme_auth.o 00:01:18.606 CC lib/nvme/nvme_cuse.o 00:01:18.606 CC lib/nvme/nvme_vfio_user.o 00:01:18.606 CC lib/nvme/nvme_rdma.o 00:01:19.542 LIB libspdk_thread.a 00:01:19.542 SO libspdk_thread.so.10.0 00:01:19.542 SYMLINK libspdk_thread.so 00:01:19.800 CC lib/accel/accel.o 00:01:19.800 CC lib/init/json_config.o 00:01:19.800 CC lib/vfu_tgt/tgt_endpoint.o 00:01:19.800 CC lib/blob/blobstore.o 00:01:19.800 CC lib/virtio/virtio.o 00:01:19.800 CC lib/accel/accel_rpc.o 00:01:19.800 CC lib/blob/request.o 00:01:19.800 CC lib/vfu_tgt/tgt_rpc.o 00:01:19.800 CC lib/virtio/virtio_vhost_user.o 00:01:19.800 CC lib/init/subsystem.o 00:01:19.800 CC lib/blob/zeroes.o 00:01:19.800 CC lib/virtio/virtio_vfio_user.o 00:01:19.800 CC lib/accel/accel_sw.o 00:01:19.800 CC lib/blob/blob_bs_dev.o 00:01:19.800 CC lib/init/subsystem_rpc.o 00:01:19.800 CC lib/virtio/virtio_pci.o 00:01:19.800 CC lib/init/rpc.o 00:01:20.058 LIB libspdk_init.a 00:01:20.058 SO libspdk_init.so.5.0 00:01:20.058 LIB libspdk_virtio.a 00:01:20.058 LIB libspdk_vfu_tgt.a 00:01:20.058 SYMLINK libspdk_init.so 00:01:20.058 SO libspdk_vfu_tgt.so.3.0 00:01:20.058 SO libspdk_virtio.so.7.0 00:01:20.058 SYMLINK libspdk_vfu_tgt.so 00:01:20.316 SYMLINK libspdk_virtio.so 00:01:20.316 CC lib/event/app.o 00:01:20.316 CC lib/event/reactor.o 00:01:20.316 CC lib/event/log_rpc.o 00:01:20.316 CC lib/event/app_rpc.o 00:01:20.316 CC lib/event/scheduler_static.o 00:01:20.575 LIB libspdk_event.a 00:01:20.833 SO libspdk_event.so.13.0 00:01:20.833 SYMLINK libspdk_event.so 00:01:20.833 LIB libspdk_accel.a 00:01:20.833 SO 
libspdk_accel.so.15.0 00:01:20.833 LIB libspdk_nvme.a 00:01:20.833 SYMLINK libspdk_accel.so 00:01:21.090 SO libspdk_nvme.so.13.0 00:01:21.090 CC lib/bdev/bdev.o 00:01:21.090 CC lib/bdev/bdev_rpc.o 00:01:21.090 CC lib/bdev/bdev_zone.o 00:01:21.090 CC lib/bdev/part.o 00:01:21.090 CC lib/bdev/scsi_nvme.o 00:01:21.349 SYMLINK libspdk_nvme.so 00:01:22.721 LIB libspdk_blob.a 00:01:22.721 SO libspdk_blob.so.11.0 00:01:22.721 SYMLINK libspdk_blob.so 00:01:22.979 CC lib/lvol/lvol.o 00:01:22.979 CC lib/blobfs/blobfs.o 00:01:22.979 CC lib/blobfs/tree.o 00:01:23.545 LIB libspdk_bdev.a 00:01:23.545 LIB libspdk_blobfs.a 00:01:23.803 SO libspdk_blobfs.so.10.0 00:01:23.803 SO libspdk_bdev.so.15.0 00:01:23.803 LIB libspdk_lvol.a 00:01:23.803 SO libspdk_lvol.so.10.0 00:01:23.803 SYMLINK libspdk_blobfs.so 00:01:23.803 SYMLINK libspdk_bdev.so 00:01:23.803 SYMLINK libspdk_lvol.so 00:01:24.068 CC lib/ublk/ublk.o 00:01:24.068 CC lib/nvmf/ctrlr.o 00:01:24.068 CC lib/ublk/ublk_rpc.o 00:01:24.068 CC lib/nbd/nbd.o 00:01:24.068 CC lib/scsi/dev.o 00:01:24.068 CC lib/nbd/nbd_rpc.o 00:01:24.068 CC lib/nvmf/ctrlr_discovery.o 00:01:24.068 CC lib/ftl/ftl_core.o 00:01:24.068 CC lib/scsi/lun.o 00:01:24.068 CC lib/nvmf/ctrlr_bdev.o 00:01:24.068 CC lib/ftl/ftl_init.o 00:01:24.068 CC lib/scsi/port.o 00:01:24.068 CC lib/ftl/ftl_layout.o 00:01:24.068 CC lib/nvmf/subsystem.o 00:01:24.068 CC lib/scsi/scsi.o 00:01:24.068 CC lib/ftl/ftl_debug.o 00:01:24.068 CC lib/scsi/scsi_bdev.o 00:01:24.068 CC lib/nvmf/nvmf.o 00:01:24.068 CC lib/ftl/ftl_io.o 00:01:24.068 CC lib/nvmf/nvmf_rpc.o 00:01:24.068 CC lib/scsi/scsi_pr.o 00:01:24.068 CC lib/scsi/scsi_rpc.o 00:01:24.068 CC lib/ftl/ftl_sb.o 00:01:24.068 CC lib/nvmf/transport.o 00:01:24.068 CC lib/ftl/ftl_l2p.o 00:01:24.068 CC lib/ftl/ftl_l2p_flat.o 00:01:24.068 CC lib/scsi/task.o 00:01:24.068 CC lib/nvmf/tcp.o 00:01:24.068 CC lib/ftl/ftl_nv_cache.o 00:01:24.068 CC lib/nvmf/vfio_user.o 00:01:24.068 CC lib/nvmf/rdma.o 00:01:24.068 CC lib/ftl/ftl_band.o 00:01:24.068 CC 
lib/ftl/ftl_band_ops.o 00:01:24.068 CC lib/ftl/ftl_writer.o 00:01:24.068 CC lib/ftl/ftl_rq.o 00:01:24.068 CC lib/ftl/ftl_reloc.o 00:01:24.068 CC lib/ftl/ftl_l2p_cache.o 00:01:24.068 CC lib/ftl/ftl_p2l.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:24.068 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:24.329 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:24.329 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:24.329 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:24.329 CC lib/ftl/utils/ftl_conf.o 00:01:24.329 CC lib/ftl/utils/ftl_md.o 00:01:24.329 CC lib/ftl/utils/ftl_mempool.o 00:01:24.329 CC lib/ftl/utils/ftl_bitmap.o 00:01:24.329 CC lib/ftl/utils/ftl_property.o 00:01:24.329 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:24.329 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:24.329 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:24.329 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:24.330 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:24.330 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:24.330 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:24.330 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:24.330 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:24.588 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:24.588 CC lib/ftl/base/ftl_base_dev.o 00:01:24.588 CC lib/ftl/base/ftl_base_bdev.o 00:01:24.588 CC lib/ftl/ftl_trace.o 00:01:24.846 LIB libspdk_nbd.a 00:01:24.846 SO libspdk_nbd.so.7.0 00:01:24.846 LIB libspdk_scsi.a 00:01:24.846 SYMLINK libspdk_nbd.so 00:01:24.846 SO libspdk_scsi.so.9.0 00:01:24.846 LIB libspdk_ublk.a 00:01:25.105 SO libspdk_ublk.so.3.0 00:01:25.105 SYMLINK libspdk_scsi.so 00:01:25.105 SYMLINK libspdk_ublk.so 00:01:25.105 CC lib/vhost/vhost.o 00:01:25.105 CC 
lib/iscsi/conn.o 00:01:25.105 CC lib/vhost/vhost_rpc.o 00:01:25.105 CC lib/iscsi/init_grp.o 00:01:25.105 CC lib/vhost/vhost_scsi.o 00:01:25.105 CC lib/iscsi/iscsi.o 00:01:25.105 CC lib/vhost/vhost_blk.o 00:01:25.105 CC lib/iscsi/md5.o 00:01:25.105 CC lib/iscsi/param.o 00:01:25.105 CC lib/vhost/rte_vhost_user.o 00:01:25.105 CC lib/iscsi/portal_grp.o 00:01:25.105 CC lib/iscsi/tgt_node.o 00:01:25.105 CC lib/iscsi/iscsi_subsystem.o 00:01:25.105 CC lib/iscsi/iscsi_rpc.o 00:01:25.105 CC lib/iscsi/task.o 00:01:25.363 LIB libspdk_ftl.a 00:01:25.363 SO libspdk_ftl.so.9.0 00:01:25.960 SYMLINK libspdk_ftl.so 00:01:26.526 LIB libspdk_vhost.a 00:01:26.526 SO libspdk_vhost.so.8.0 00:01:26.526 LIB libspdk_nvmf.a 00:01:26.526 SYMLINK libspdk_vhost.so 00:01:26.526 SO libspdk_nvmf.so.18.0 00:01:26.526 LIB libspdk_iscsi.a 00:01:26.526 SO libspdk_iscsi.so.8.0 00:01:26.784 SYMLINK libspdk_nvmf.so 00:01:26.784 SYMLINK libspdk_iscsi.so 00:01:27.043 CC module/vfu_device/vfu_virtio.o 00:01:27.043 CC module/vfu_device/vfu_virtio_blk.o 00:01:27.043 CC module/vfu_device/vfu_virtio_scsi.o 00:01:27.043 CC module/vfu_device/vfu_virtio_rpc.o 00:01:27.043 CC module/env_dpdk/env_dpdk_rpc.o 00:01:27.043 CC module/accel/error/accel_error.o 00:01:27.043 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:27.043 CC module/accel/dsa/accel_dsa.o 00:01:27.043 CC module/scheduler/gscheduler/gscheduler.o 00:01:27.043 CC module/accel/error/accel_error_rpc.o 00:01:27.043 CC module/accel/dsa/accel_dsa_rpc.o 00:01:27.043 CC module/accel/ioat/accel_ioat.o 00:01:27.043 CC module/accel/ioat/accel_ioat_rpc.o 00:01:27.043 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:27.043 CC module/blob/bdev/blob_bdev.o 00:01:27.043 CC module/sock/posix/posix.o 00:01:27.043 CC module/accel/iaa/accel_iaa.o 00:01:27.043 CC module/accel/iaa/accel_iaa_rpc.o 00:01:27.043 CC module/keyring/file/keyring.o 00:01:27.043 CC module/keyring/file/keyring_rpc.o 00:01:27.301 LIB libspdk_env_dpdk_rpc.a 00:01:27.301 SO 
libspdk_env_dpdk_rpc.so.6.0 00:01:27.301 SYMLINK libspdk_env_dpdk_rpc.so 00:01:27.301 LIB libspdk_scheduler_gscheduler.a 00:01:27.301 LIB libspdk_scheduler_dpdk_governor.a 00:01:27.301 LIB libspdk_keyring_file.a 00:01:27.301 SO libspdk_scheduler_gscheduler.so.4.0 00:01:27.301 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:27.301 SO libspdk_keyring_file.so.1.0 00:01:27.301 LIB libspdk_accel_error.a 00:01:27.301 LIB libspdk_accel_ioat.a 00:01:27.301 LIB libspdk_scheduler_dynamic.a 00:01:27.301 LIB libspdk_accel_iaa.a 00:01:27.301 SO libspdk_accel_error.so.2.0 00:01:27.301 SYMLINK libspdk_scheduler_gscheduler.so 00:01:27.301 SO libspdk_scheduler_dynamic.so.4.0 00:01:27.301 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:27.301 SO libspdk_accel_ioat.so.6.0 00:01:27.301 SYMLINK libspdk_keyring_file.so 00:01:27.301 LIB libspdk_accel_dsa.a 00:01:27.301 SO libspdk_accel_iaa.so.3.0 00:01:27.301 SO libspdk_accel_dsa.so.5.0 00:01:27.558 SYMLINK libspdk_accel_error.so 00:01:27.558 LIB libspdk_blob_bdev.a 00:01:27.558 SYMLINK libspdk_scheduler_dynamic.so 00:01:27.558 SYMLINK libspdk_accel_ioat.so 00:01:27.558 SYMLINK libspdk_accel_iaa.so 00:01:27.558 SO libspdk_blob_bdev.so.11.0 00:01:27.558 SYMLINK libspdk_accel_dsa.so 00:01:27.558 SYMLINK libspdk_blob_bdev.so 00:01:27.818 LIB libspdk_vfu_device.a 00:01:27.818 SO libspdk_vfu_device.so.3.0 00:01:27.818 CC module/blobfs/bdev/blobfs_bdev.o 00:01:27.818 CC module/bdev/raid/bdev_raid.o 00:01:27.818 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:27.818 CC module/bdev/delay/vbdev_delay.o 00:01:27.818 CC module/bdev/malloc/bdev_malloc.o 00:01:27.818 CC module/bdev/error/vbdev_error.o 00:01:27.818 CC module/bdev/raid/bdev_raid_rpc.o 00:01:27.818 CC module/bdev/gpt/gpt.o 00:01:27.818 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:27.818 CC module/bdev/iscsi/bdev_iscsi.o 00:01:27.818 CC module/bdev/raid/bdev_raid_sb.o 00:01:27.818 CC module/bdev/gpt/vbdev_gpt.o 00:01:27.818 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:27.818 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:01:27.818 CC module/bdev/error/vbdev_error_rpc.o 00:01:27.818 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:27.818 CC module/bdev/null/bdev_null.o 00:01:27.818 CC module/bdev/raid/raid0.o 00:01:27.818 CC module/bdev/aio/bdev_aio.o 00:01:27.818 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:27.818 CC module/bdev/aio/bdev_aio_rpc.o 00:01:27.818 CC module/bdev/passthru/vbdev_passthru.o 00:01:27.818 CC module/bdev/raid/raid1.o 00:01:27.818 CC module/bdev/null/bdev_null_rpc.o 00:01:27.818 CC module/bdev/ftl/bdev_ftl.o 00:01:27.818 CC module/bdev/raid/concat.o 00:01:27.818 CC module/bdev/lvol/vbdev_lvol.o 00:01:27.818 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:27.818 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:27.818 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:27.818 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:27.818 CC module/bdev/nvme/bdev_nvme.o 00:01:27.818 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:27.818 CC module/bdev/split/vbdev_split.o 00:01:27.818 CC module/bdev/split/vbdev_split_rpc.o 00:01:27.818 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:27.818 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:27.818 CC module/bdev/nvme/nvme_rpc.o 00:01:27.818 CC module/bdev/nvme/bdev_mdns_client.o 00:01:27.818 CC module/bdev/nvme/vbdev_opal.o 00:01:27.818 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:27.818 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:27.818 SYMLINK libspdk_vfu_device.so 00:01:27.818 LIB libspdk_sock_posix.a 00:01:28.076 SO libspdk_sock_posix.so.6.0 00:01:28.076 SYMLINK libspdk_sock_posix.so 00:01:28.076 LIB libspdk_blobfs_bdev.a 00:01:28.076 SO libspdk_blobfs_bdev.so.6.0 00:01:28.076 LIB libspdk_bdev_gpt.a 00:01:28.334 SYMLINK libspdk_blobfs_bdev.so 00:01:28.334 LIB libspdk_bdev_split.a 00:01:28.334 SO libspdk_bdev_gpt.so.6.0 00:01:28.334 LIB libspdk_bdev_error.a 00:01:28.334 LIB libspdk_bdev_passthru.a 00:01:28.334 LIB libspdk_bdev_null.a 00:01:28.334 SO libspdk_bdev_split.so.6.0 00:01:28.334 SO 
libspdk_bdev_error.so.6.0 00:01:28.334 SO libspdk_bdev_passthru.so.6.0 00:01:28.334 SO libspdk_bdev_null.so.6.0 00:01:28.334 LIB libspdk_bdev_ftl.a 00:01:28.334 SYMLINK libspdk_bdev_gpt.so 00:01:28.334 LIB libspdk_bdev_aio.a 00:01:28.334 SO libspdk_bdev_aio.so.6.0 00:01:28.334 SO libspdk_bdev_ftl.so.6.0 00:01:28.334 SYMLINK libspdk_bdev_split.so 00:01:28.334 LIB libspdk_bdev_zone_block.a 00:01:28.334 SYMLINK libspdk_bdev_error.so 00:01:28.334 SYMLINK libspdk_bdev_passthru.so 00:01:28.334 LIB libspdk_bdev_iscsi.a 00:01:28.334 SYMLINK libspdk_bdev_null.so 00:01:28.334 LIB libspdk_bdev_malloc.a 00:01:28.334 SO libspdk_bdev_zone_block.so.6.0 00:01:28.334 SO libspdk_bdev_iscsi.so.6.0 00:01:28.334 SO libspdk_bdev_malloc.so.6.0 00:01:28.334 SYMLINK libspdk_bdev_aio.so 00:01:28.334 SYMLINK libspdk_bdev_ftl.so 00:01:28.334 LIB libspdk_bdev_lvol.a 00:01:28.334 LIB libspdk_bdev_delay.a 00:01:28.334 SYMLINK libspdk_bdev_zone_block.so 00:01:28.334 SO libspdk_bdev_lvol.so.6.0 00:01:28.334 SYMLINK libspdk_bdev_iscsi.so 00:01:28.334 SYMLINK libspdk_bdev_malloc.so 00:01:28.334 SO libspdk_bdev_delay.so.6.0 00:01:28.593 SYMLINK libspdk_bdev_lvol.so 00:01:28.593 SYMLINK libspdk_bdev_delay.so 00:01:28.593 LIB libspdk_bdev_virtio.a 00:01:28.593 SO libspdk_bdev_virtio.so.6.0 00:01:28.593 SYMLINK libspdk_bdev_virtio.so 00:01:28.852 LIB libspdk_bdev_raid.a 00:01:28.852 SO libspdk_bdev_raid.so.6.0 00:01:28.852 SYMLINK libspdk_bdev_raid.so 00:01:30.228 LIB libspdk_bdev_nvme.a 00:01:30.228 SO libspdk_bdev_nvme.so.7.0 00:01:30.228 SYMLINK libspdk_bdev_nvme.so 00:01:30.485 CC module/event/subsystems/scheduler/scheduler.o 00:01:30.485 CC module/event/subsystems/vmd/vmd.o 00:01:30.485 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:30.485 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:30.485 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:30.485 CC module/event/subsystems/iobuf/iobuf.o 00:01:30.485 CC module/event/subsystems/keyring/keyring.o 00:01:30.485 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:01:30.485 CC module/event/subsystems/sock/sock.o 00:01:30.743 LIB libspdk_event_keyring.a 00:01:30.743 LIB libspdk_event_sock.a 00:01:30.743 LIB libspdk_event_vhost_blk.a 00:01:30.743 LIB libspdk_event_vfu_tgt.a 00:01:30.743 LIB libspdk_event_scheduler.a 00:01:30.743 LIB libspdk_event_vmd.a 00:01:30.743 LIB libspdk_event_iobuf.a 00:01:30.743 SO libspdk_event_sock.so.5.0 00:01:30.743 SO libspdk_event_keyring.so.1.0 00:01:30.743 SO libspdk_event_vhost_blk.so.3.0 00:01:30.743 SO libspdk_event_vfu_tgt.so.3.0 00:01:30.743 SO libspdk_event_scheduler.so.4.0 00:01:30.743 SO libspdk_event_vmd.so.6.0 00:01:30.743 SO libspdk_event_iobuf.so.3.0 00:01:30.743 SYMLINK libspdk_event_keyring.so 00:01:30.743 SYMLINK libspdk_event_sock.so 00:01:30.743 SYMLINK libspdk_event_vhost_blk.so 00:01:30.743 SYMLINK libspdk_event_vfu_tgt.so 00:01:30.743 SYMLINK libspdk_event_scheduler.so 00:01:30.743 SYMLINK libspdk_event_vmd.so 00:01:30.743 SYMLINK libspdk_event_iobuf.so 00:01:31.002 CC module/event/subsystems/accel/accel.o 00:01:31.288 LIB libspdk_event_accel.a 00:01:31.288 SO libspdk_event_accel.so.6.0 00:01:31.288 SYMLINK libspdk_event_accel.so 00:01:31.288 CC module/event/subsystems/bdev/bdev.o 00:01:31.545 LIB libspdk_event_bdev.a 00:01:31.545 SO libspdk_event_bdev.so.6.0 00:01:31.545 SYMLINK libspdk_event_bdev.so 00:01:31.803 CC module/event/subsystems/ublk/ublk.o 00:01:31.803 CC module/event/subsystems/nbd/nbd.o 00:01:31.803 CC module/event/subsystems/scsi/scsi.o 00:01:31.803 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:31.803 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:32.061 LIB libspdk_event_nbd.a 00:01:32.061 LIB libspdk_event_ublk.a 00:01:32.061 SO libspdk_event_nbd.so.6.0 00:01:32.061 LIB libspdk_event_scsi.a 00:01:32.061 SO libspdk_event_ublk.so.3.0 00:01:32.061 SO libspdk_event_scsi.so.6.0 00:01:32.061 SYMLINK libspdk_event_nbd.so 00:01:32.061 SYMLINK libspdk_event_ublk.so 00:01:32.061 SYMLINK libspdk_event_scsi.so 
00:01:32.061 LIB libspdk_event_nvmf.a 00:01:32.061 SO libspdk_event_nvmf.so.6.0 00:01:32.061 SYMLINK libspdk_event_nvmf.so 00:01:32.319 CC module/event/subsystems/iscsi/iscsi.o 00:01:32.319 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:32.319 LIB libspdk_event_vhost_scsi.a 00:01:32.319 LIB libspdk_event_iscsi.a 00:01:32.319 SO libspdk_event_vhost_scsi.so.3.0 00:01:32.319 SO libspdk_event_iscsi.so.6.0 00:01:32.319 SYMLINK libspdk_event_vhost_scsi.so 00:01:32.579 SYMLINK libspdk_event_iscsi.so 00:01:32.579 SO libspdk.so.6.0 00:01:32.579 SYMLINK libspdk.so 00:01:32.853 CC app/trace_record/trace_record.o 00:01:32.853 CXX app/trace/trace.o 00:01:32.853 CC app/spdk_nvme_perf/perf.o 00:01:32.853 CC app/spdk_nvme_identify/identify.o 00:01:32.853 CC app/spdk_lspci/spdk_lspci.o 00:01:32.853 TEST_HEADER include/spdk/accel.h 00:01:32.853 TEST_HEADER include/spdk/accel_module.h 00:01:32.853 CC test/rpc_client/rpc_client_test.o 00:01:32.853 TEST_HEADER include/spdk/assert.h 00:01:32.853 CC app/spdk_nvme_discover/discovery_aer.o 00:01:32.853 CC app/spdk_top/spdk_top.o 00:01:32.853 TEST_HEADER include/spdk/barrier.h 00:01:32.853 TEST_HEADER include/spdk/base64.h 00:01:32.853 TEST_HEADER include/spdk/bdev.h 00:01:32.853 TEST_HEADER include/spdk/bdev_module.h 00:01:32.853 TEST_HEADER include/spdk/bdev_zone.h 00:01:32.853 TEST_HEADER include/spdk/bit_array.h 00:01:32.853 TEST_HEADER include/spdk/bit_pool.h 00:01:32.853 TEST_HEADER include/spdk/blob_bdev.h 00:01:32.853 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:32.853 TEST_HEADER include/spdk/blobfs.h 00:01:32.853 TEST_HEADER include/spdk/blob.h 00:01:32.853 TEST_HEADER include/spdk/conf.h 00:01:32.853 TEST_HEADER include/spdk/config.h 00:01:32.853 TEST_HEADER include/spdk/cpuset.h 00:01:32.853 TEST_HEADER include/spdk/crc16.h 00:01:32.853 TEST_HEADER include/spdk/crc32.h 00:01:32.854 TEST_HEADER include/spdk/crc64.h 00:01:32.854 TEST_HEADER include/spdk/dif.h 00:01:32.854 CC app/spdk_dd/spdk_dd.o 00:01:32.854 TEST_HEADER 
include/spdk/dma.h 00:01:32.854 TEST_HEADER include/spdk/endian.h 00:01:32.854 CC app/nvmf_tgt/nvmf_main.o 00:01:32.854 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:32.854 TEST_HEADER include/spdk/env_dpdk.h 00:01:32.854 TEST_HEADER include/spdk/env.h 00:01:32.854 TEST_HEADER include/spdk/event.h 00:01:32.854 TEST_HEADER include/spdk/fd_group.h 00:01:32.854 TEST_HEADER include/spdk/fd.h 00:01:32.854 TEST_HEADER include/spdk/file.h 00:01:32.854 CC app/iscsi_tgt/iscsi_tgt.o 00:01:32.854 TEST_HEADER include/spdk/ftl.h 00:01:32.854 TEST_HEADER include/spdk/gpt_spec.h 00:01:32.854 CC app/vhost/vhost.o 00:01:32.854 TEST_HEADER include/spdk/hexlify.h 00:01:32.854 TEST_HEADER include/spdk/histogram_data.h 00:01:32.854 TEST_HEADER include/spdk/idxd.h 00:01:32.854 TEST_HEADER include/spdk/idxd_spec.h 00:01:32.854 TEST_HEADER include/spdk/init.h 00:01:32.854 TEST_HEADER include/spdk/ioat.h 00:01:32.854 TEST_HEADER include/spdk/ioat_spec.h 00:01:32.854 TEST_HEADER include/spdk/iscsi_spec.h 00:01:32.854 CC examples/vmd/led/led.o 00:01:32.854 CC examples/nvme/hello_world/hello_world.o 00:01:32.854 TEST_HEADER include/spdk/json.h 00:01:32.854 TEST_HEADER include/spdk/jsonrpc.h 00:01:32.854 CC examples/vmd/lsvmd/lsvmd.o 00:01:32.854 CC examples/ioat/perf/perf.o 00:01:32.854 TEST_HEADER include/spdk/keyring.h 00:01:32.854 CC examples/nvme/arbitration/arbitration.o 00:01:32.854 CC examples/ioat/verify/verify.o 00:01:32.854 CC examples/nvme/hotplug/hotplug.o 00:01:32.854 TEST_HEADER include/spdk/keyring_module.h 00:01:32.854 CC app/spdk_tgt/spdk_tgt.o 00:01:32.854 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:32.854 CC examples/nvme/reconnect/reconnect.o 00:01:32.854 TEST_HEADER include/spdk/likely.h 00:01:32.854 CC examples/util/zipf/zipf.o 00:01:32.854 TEST_HEADER include/spdk/log.h 00:01:32.854 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:32.854 CC app/fio/nvme/fio_plugin.o 00:01:32.854 TEST_HEADER include/spdk/lvol.h 00:01:32.854 CC examples/accel/perf/accel_perf.o 
00:01:32.854 TEST_HEADER include/spdk/memory.h 00:01:32.854 CC test/event/event_perf/event_perf.o 00:01:32.854 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:32.854 CC examples/nvme/abort/abort.o 00:01:32.854 TEST_HEADER include/spdk/mmio.h 00:01:32.854 CC examples/sock/hello_world/hello_sock.o 00:01:32.854 TEST_HEADER include/spdk/nbd.h 00:01:32.854 CC examples/idxd/perf/perf.o 00:01:32.854 TEST_HEADER include/spdk/notify.h 00:01:32.854 CC test/thread/poller_perf/poller_perf.o 00:01:32.854 CC test/nvme/aer/aer.o 00:01:32.854 TEST_HEADER include/spdk/nvme.h 00:01:32.854 TEST_HEADER include/spdk/nvme_intel.h 00:01:32.854 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:32.854 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:33.122 TEST_HEADER include/spdk/nvme_spec.h 00:01:33.122 TEST_HEADER include/spdk/nvme_zns.h 00:01:33.122 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:33.122 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:33.122 TEST_HEADER include/spdk/nvmf.h 00:01:33.122 TEST_HEADER include/spdk/nvmf_spec.h 00:01:33.122 TEST_HEADER include/spdk/nvmf_transport.h 00:01:33.122 CC examples/nvmf/nvmf/nvmf.o 00:01:33.122 CC examples/thread/thread/thread_ex.o 00:01:33.122 CC test/bdev/bdevio/bdevio.o 00:01:33.122 TEST_HEADER include/spdk/opal.h 00:01:33.122 CC examples/bdev/hello_world/hello_bdev.o 00:01:33.122 CC examples/bdev/bdevperf/bdevperf.o 00:01:33.122 TEST_HEADER include/spdk/opal_spec.h 00:01:33.122 CC test/blobfs/mkfs/mkfs.o 00:01:33.122 TEST_HEADER include/spdk/pci_ids.h 00:01:33.122 CC examples/blob/cli/blobcli.o 00:01:33.122 CC examples/blob/hello_world/hello_blob.o 00:01:33.122 CC test/dma/test_dma/test_dma.o 00:01:33.122 CC test/accel/dif/dif.o 00:01:33.122 TEST_HEADER include/spdk/pipe.h 00:01:33.122 TEST_HEADER include/spdk/queue.h 00:01:33.122 TEST_HEADER include/spdk/reduce.h 00:01:33.122 CC test/app/bdev_svc/bdev_svc.o 00:01:33.122 TEST_HEADER include/spdk/rpc.h 00:01:33.122 TEST_HEADER include/spdk/scheduler.h 00:01:33.122 TEST_HEADER 
include/spdk/scsi.h 00:01:33.122 TEST_HEADER include/spdk/scsi_spec.h 00:01:33.122 TEST_HEADER include/spdk/sock.h 00:01:33.122 TEST_HEADER include/spdk/stdinc.h 00:01:33.122 TEST_HEADER include/spdk/string.h 00:01:33.122 TEST_HEADER include/spdk/thread.h 00:01:33.122 TEST_HEADER include/spdk/trace.h 00:01:33.122 LINK spdk_lspci 00:01:33.122 TEST_HEADER include/spdk/trace_parser.h 00:01:33.122 TEST_HEADER include/spdk/tree.h 00:01:33.122 TEST_HEADER include/spdk/ublk.h 00:01:33.122 TEST_HEADER include/spdk/util.h 00:01:33.122 CC test/lvol/esnap/esnap.o 00:01:33.122 TEST_HEADER include/spdk/uuid.h 00:01:33.122 TEST_HEADER include/spdk/version.h 00:01:33.122 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:33.122 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:33.122 TEST_HEADER include/spdk/vhost.h 00:01:33.122 TEST_HEADER include/spdk/vmd.h 00:01:33.122 TEST_HEADER include/spdk/xor.h 00:01:33.122 TEST_HEADER include/spdk/zipf.h 00:01:33.122 CC test/env/mem_callbacks/mem_callbacks.o 00:01:33.122 CXX test/cpp_headers/accel.o 00:01:33.122 LINK rpc_client_test 00:01:33.122 LINK lsvmd 00:01:33.122 LINK spdk_nvme_discover 00:01:33.122 LINK led 00:01:33.122 LINK interrupt_tgt 00:01:33.388 LINK nvmf_tgt 00:01:33.388 LINK event_perf 00:01:33.388 LINK zipf 00:01:33.388 LINK vhost 00:01:33.388 LINK spdk_trace_record 00:01:33.388 LINK poller_perf 00:01:33.388 LINK iscsi_tgt 00:01:33.388 LINK cmb_copy 00:01:33.388 LINK pmr_persistence 00:01:33.388 LINK verify 00:01:33.388 LINK ioat_perf 00:01:33.388 LINK hello_world 00:01:33.388 LINK spdk_tgt 00:01:33.388 LINK hotplug 00:01:33.388 LINK bdev_svc 00:01:33.388 LINK mkfs 00:01:33.388 LINK hello_sock 00:01:33.388 LINK hello_bdev 00:01:33.388 LINK thread 00:01:33.650 LINK aer 00:01:33.650 CXX test/cpp_headers/accel_module.o 00:01:33.650 LINK hello_blob 00:01:33.650 CXX test/cpp_headers/assert.o 00:01:33.650 LINK arbitration 00:01:33.650 LINK spdk_dd 00:01:33.650 LINK nvmf 00:01:33.650 LINK reconnect 00:01:33.650 LINK idxd_perf 
00:01:33.650 CC test/app/histogram_perf/histogram_perf.o 00:01:33.650 LINK abort 00:01:33.650 LINK spdk_trace 00:01:33.650 CC test/event/reactor/reactor.o 00:01:33.650 CXX test/cpp_headers/barrier.o 00:01:33.650 CC test/event/reactor_perf/reactor_perf.o 00:01:33.650 LINK dif 00:01:33.650 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:33.650 CC test/nvme/reset/reset.o 00:01:33.650 LINK bdevio 00:01:33.650 CC test/app/jsoncat/jsoncat.o 00:01:33.650 LINK test_dma 00:01:33.650 CXX test/cpp_headers/base64.o 00:01:33.650 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:33.912 CC test/event/app_repeat/app_repeat.o 00:01:33.912 CC app/fio/bdev/fio_plugin.o 00:01:33.912 CC test/env/vtophys/vtophys.o 00:01:33.912 CXX test/cpp_headers/bdev.o 00:01:33.912 CXX test/cpp_headers/bdev_module.o 00:01:33.912 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:33.912 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:33.912 CC test/app/stub/stub.o 00:01:33.912 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:33.912 LINK accel_perf 00:01:33.912 CXX test/cpp_headers/bdev_zone.o 00:01:33.912 LINK nvme_manage 00:01:33.912 CC test/event/scheduler/scheduler.o 00:01:33.912 CC test/env/pci/pci_ut.o 00:01:33.912 CXX test/cpp_headers/bit_array.o 00:01:33.912 CC test/nvme/sgl/sgl.o 00:01:33.912 CXX test/cpp_headers/bit_pool.o 00:01:33.912 CXX test/cpp_headers/blob_bdev.o 00:01:33.912 CXX test/cpp_headers/blobfs_bdev.o 00:01:33.912 CC test/env/memory/memory_ut.o 00:01:33.912 CXX test/cpp_headers/blobfs.o 00:01:33.912 CXX test/cpp_headers/blob.o 00:01:33.912 LINK spdk_nvme 00:01:33.912 LINK blobcli 00:01:33.912 LINK histogram_perf 00:01:33.912 CXX test/cpp_headers/conf.o 00:01:33.912 LINK reactor 00:01:33.912 LINK reactor_perf 00:01:34.182 CC test/nvme/e2edp/nvme_dp.o 00:01:34.182 CXX test/cpp_headers/config.o 00:01:34.182 LINK jsoncat 00:01:34.182 CC test/nvme/overhead/overhead.o 00:01:34.182 CXX test/cpp_headers/cpuset.o 00:01:34.182 LINK app_repeat 00:01:34.182 CXX test/cpp_headers/crc16.o 
00:01:34.182 CC test/nvme/err_injection/err_injection.o 00:01:34.182 LINK vtophys 00:01:34.182 CXX test/cpp_headers/crc32.o 00:01:34.182 CC test/nvme/startup/startup.o 00:01:34.182 CXX test/cpp_headers/crc64.o 00:01:34.182 LINK env_dpdk_post_init 00:01:34.182 CC test/nvme/reserve/reserve.o 00:01:34.182 LINK stub 00:01:34.182 LINK reset 00:01:34.182 LINK mem_callbacks 00:01:34.182 CXX test/cpp_headers/dif.o 00:01:34.182 CC test/nvme/simple_copy/simple_copy.o 00:01:34.182 CC test/nvme/connect_stress/connect_stress.o 00:01:34.182 CXX test/cpp_headers/dma.o 00:01:34.444 CXX test/cpp_headers/endian.o 00:01:34.444 CC test/nvme/boot_partition/boot_partition.o 00:01:34.444 CXX test/cpp_headers/env_dpdk.o 00:01:34.444 CXX test/cpp_headers/env.o 00:01:34.444 CC test/nvme/compliance/nvme_compliance.o 00:01:34.444 CXX test/cpp_headers/event.o 00:01:34.444 CXX test/cpp_headers/fd_group.o 00:01:34.444 CXX test/cpp_headers/fd.o 00:01:34.444 LINK spdk_nvme_perf 00:01:34.444 CXX test/cpp_headers/file.o 00:01:34.444 CC test/nvme/fused_ordering/fused_ordering.o 00:01:34.444 LINK scheduler 00:01:34.444 CXX test/cpp_headers/ftl.o 00:01:34.444 CXX test/cpp_headers/gpt_spec.o 00:01:34.444 LINK spdk_nvme_identify 00:01:34.444 CXX test/cpp_headers/hexlify.o 00:01:34.444 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:34.444 CC test/nvme/fdp/fdp.o 00:01:34.444 CC test/nvme/cuse/cuse.o 00:01:34.444 LINK sgl 00:01:34.444 CXX test/cpp_headers/histogram_data.o 00:01:34.444 CXX test/cpp_headers/idxd.o 00:01:34.444 LINK spdk_top 00:01:34.444 CXX test/cpp_headers/idxd_spec.o 00:01:34.444 LINK bdevperf 00:01:34.444 CXX test/cpp_headers/init.o 00:01:34.444 CXX test/cpp_headers/ioat.o 00:01:34.444 LINK nvme_fuzz 00:01:34.444 CXX test/cpp_headers/ioat_spec.o 00:01:34.444 CXX test/cpp_headers/iscsi_spec.o 00:01:34.444 CXX test/cpp_headers/json.o 00:01:34.702 LINK startup 00:01:34.702 LINK err_injection 00:01:34.702 LINK nvme_dp 00:01:34.702 CXX test/cpp_headers/jsonrpc.o 00:01:34.702 CXX 
test/cpp_headers/keyring.o 00:01:34.702 LINK overhead 00:01:34.702 LINK reserve 00:01:34.702 LINK pci_ut 00:01:34.702 LINK vhost_fuzz 00:01:34.702 LINK connect_stress 00:01:34.702 CXX test/cpp_headers/keyring_module.o 00:01:34.702 CXX test/cpp_headers/likely.o 00:01:34.702 LINK boot_partition 00:01:34.702 CXX test/cpp_headers/log.o 00:01:34.702 CXX test/cpp_headers/lvol.o 00:01:34.702 CXX test/cpp_headers/memory.o 00:01:34.702 CXX test/cpp_headers/mmio.o 00:01:34.702 CXX test/cpp_headers/nbd.o 00:01:34.703 CXX test/cpp_headers/notify.o 00:01:34.703 CXX test/cpp_headers/nvme.o 00:01:34.703 CXX test/cpp_headers/nvme_intel.o 00:01:34.703 CXX test/cpp_headers/nvme_ocssd.o 00:01:34.703 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:34.703 LINK spdk_bdev 00:01:34.703 CXX test/cpp_headers/nvme_spec.o 00:01:34.703 CXX test/cpp_headers/nvme_zns.o 00:01:34.703 CXX test/cpp_headers/nvmf_cmd.o 00:01:34.703 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:34.703 LINK simple_copy 00:01:34.703 LINK fused_ordering 00:01:34.965 CXX test/cpp_headers/nvmf.o 00:01:34.965 LINK doorbell_aers 00:01:34.965 CXX test/cpp_headers/nvmf_spec.o 00:01:34.965 CXX test/cpp_headers/nvmf_transport.o 00:01:34.965 CXX test/cpp_headers/opal.o 00:01:34.965 CXX test/cpp_headers/opal_spec.o 00:01:34.965 CXX test/cpp_headers/pci_ids.o 00:01:34.965 CXX test/cpp_headers/pipe.o 00:01:34.965 CXX test/cpp_headers/queue.o 00:01:34.965 CXX test/cpp_headers/reduce.o 00:01:34.965 CXX test/cpp_headers/rpc.o 00:01:34.965 CXX test/cpp_headers/scheduler.o 00:01:34.965 CXX test/cpp_headers/scsi_spec.o 00:01:34.965 CXX test/cpp_headers/scsi.o 00:01:34.965 CXX test/cpp_headers/sock.o 00:01:34.965 CXX test/cpp_headers/stdinc.o 00:01:34.965 CXX test/cpp_headers/string.o 00:01:34.965 CXX test/cpp_headers/thread.o 00:01:34.965 CXX test/cpp_headers/trace.o 00:01:34.965 CXX test/cpp_headers/trace_parser.o 00:01:34.965 CXX test/cpp_headers/tree.o 00:01:34.965 LINK nvme_compliance 00:01:34.965 CXX test/cpp_headers/ublk.o 00:01:34.965 CXX 
test/cpp_headers/util.o 00:01:34.965 CXX test/cpp_headers/uuid.o 00:01:34.965 CXX test/cpp_headers/version.o 00:01:34.965 LINK fdp 00:01:34.965 CXX test/cpp_headers/vfio_user_pci.o 00:01:34.965 CXX test/cpp_headers/vfio_user_spec.o 00:01:35.224 CXX test/cpp_headers/vhost.o 00:01:35.224 CXX test/cpp_headers/vmd.o 00:01:35.224 CXX test/cpp_headers/xor.o 00:01:35.224 CXX test/cpp_headers/zipf.o 00:01:35.482 LINK memory_ut 00:01:36.048 LINK cuse 00:01:36.048 LINK iscsi_fuzz 00:01:38.576 LINK esnap 00:01:38.833 00:01:38.833 real 0m47.875s 00:01:38.833 user 10m0.491s 00:01:38.833 sys 2m25.176s 00:01:38.833 16:48:54 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:38.833 16:48:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.833 ************************************ 00:01:38.833 END TEST make 00:01:38.833 ************************************ 00:01:38.833 16:48:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:38.833 16:48:54 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:38.833 16:48:54 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:38.833 16:48:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.833 16:48:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:38.833 16:48:54 -- pm/common@45 -- $ pid=1481173 00:01:38.833 16:48:54 -- pm/common@52 -- $ sudo kill -TERM 1481173 00:01:39.092 16:48:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.092 16:48:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:39.092 16:48:54 -- pm/common@45 -- $ pid=1481174 00:01:39.092 16:48:54 -- pm/common@52 -- $ sudo kill -TERM 1481174 00:01:39.092 16:48:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.092 16:48:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
00:01:39.092 16:48:54 -- pm/common@45 -- $ pid=1481175 00:01:39.092 16:48:54 -- pm/common@52 -- $ sudo kill -TERM 1481175 00:01:39.092 16:48:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.092 16:48:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:39.092 16:48:54 -- pm/common@45 -- $ pid=1481176 00:01:39.092 16:48:54 -- pm/common@52 -- $ sudo kill -TERM 1481176 00:01:39.092 16:48:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:39.092 16:48:54 -- nvmf/common.sh@7 -- # uname -s 00:01:39.092 16:48:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:39.092 16:48:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:39.092 16:48:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:39.092 16:48:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:39.092 16:48:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:39.092 16:48:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:39.092 16:48:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:39.092 16:48:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:39.092 16:48:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:39.092 16:48:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:39.092 16:48:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:39.092 16:48:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:39.092 16:48:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:39.092 16:48:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:39.092 16:48:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:39.092 16:48:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:39.092 16:48:54 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:39.092 16:48:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:39.092 16:48:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.092 16:48:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.092 16:48:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.092 16:48:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.092 16:48:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.092 16:48:54 -- paths/export.sh@5 -- # export PATH 00:01:39.092 16:48:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.092 16:48:54 -- nvmf/common.sh@47 -- # : 0 00:01:39.092 16:48:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:39.092 16:48:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:39.092 16:48:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:39.092 16:48:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:01:39.092 16:48:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:39.092 16:48:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:39.092 16:48:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:39.092 16:48:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:39.092 16:48:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:39.092 16:48:54 -- spdk/autotest.sh@32 -- # uname -s 00:01:39.092 16:48:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:39.092 16:48:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:39.092 16:48:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:39.092 16:48:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:39.092 16:48:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:39.092 16:48:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:39.092 16:48:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:39.092 16:48:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:39.092 16:48:54 -- spdk/autotest.sh@48 -- # udevadm_pid=1536409 00:01:39.092 16:48:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:39.092 16:48:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:39.092 16:48:54 -- pm/common@17 -- # local monitor 00:01:39.092 16:48:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.092 16:48:54 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1536412 00:01:39.092 16:48:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.092 16:48:54 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1536414 00:01:39.092 16:48:54 -- pm/common@21 -- # date +%s 00:01:39.092 16:48:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.092 16:48:54 -- 
pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1536417 00:01:39.092 16:48:54 -- pm/common@21 -- # date +%s 00:01:39.092 16:48:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.092 16:48:54 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1536421 00:01:39.092 16:48:54 -- pm/common@26 -- # sleep 1 00:01:39.092 16:48:54 -- pm/common@21 -- # date +%s 00:01:39.092 16:48:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713451734 00:01:39.092 16:48:54 -- pm/common@21 -- # date +%s 00:01:39.092 16:48:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713451734 00:01:39.092 16:48:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713451734 00:01:39.092 16:48:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713451734 00:01:39.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713451734_collect-vmstat.pm.log 00:01:39.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713451734_collect-bmc-pm.bmc.pm.log 00:01:39.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713451734_collect-cpu-load.pm.log 00:01:39.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713451734_collect-cpu-temp.pm.log 
00:01:40.026 16:48:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:40.026 16:48:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:40.026 16:48:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:40.026 16:48:55 -- common/autotest_common.sh@10 -- # set +x 00:01:40.026 16:48:55 -- spdk/autotest.sh@59 -- # create_test_list 00:01:40.026 16:48:55 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:40.026 16:48:55 -- common/autotest_common.sh@10 -- # set +x 00:01:40.284 16:48:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:40.284 16:48:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.284 16:48:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.284 16:48:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:40.284 16:48:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.284 16:48:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:40.284 16:48:55 -- common/autotest_common.sh@1441 -- # uname 00:01:40.284 16:48:55 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:40.284 16:48:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:40.284 16:48:55 -- common/autotest_common.sh@1461 -- # uname 00:01:40.284 16:48:55 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:40.284 16:48:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:40.284 16:48:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:40.284 16:48:55 -- spdk/autotest.sh@72 -- # hash lcov 00:01:40.284 16:48:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:40.284 16:48:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:40.284 --rc lcov_branch_coverage=1 00:01:40.284 --rc lcov_function_coverage=1 00:01:40.284 --rc 
genhtml_branch_coverage=1 00:01:40.284 --rc genhtml_function_coverage=1 00:01:40.284 --rc genhtml_legend=1 00:01:40.284 --rc geninfo_all_blocks=1 00:01:40.284 ' 00:01:40.284 16:48:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:40.284 --rc lcov_branch_coverage=1 00:01:40.284 --rc lcov_function_coverage=1 00:01:40.284 --rc genhtml_branch_coverage=1 00:01:40.284 --rc genhtml_function_coverage=1 00:01:40.284 --rc genhtml_legend=1 00:01:40.284 --rc geninfo_all_blocks=1 00:01:40.284 ' 00:01:40.284 16:48:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:40.284 --rc lcov_branch_coverage=1 00:01:40.284 --rc lcov_function_coverage=1 00:01:40.284 --rc genhtml_branch_coverage=1 00:01:40.284 --rc genhtml_function_coverage=1 00:01:40.284 --rc genhtml_legend=1 00:01:40.284 --rc geninfo_all_blocks=1 00:01:40.284 --no-external' 00:01:40.284 16:48:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:40.284 --rc lcov_branch_coverage=1 00:01:40.284 --rc lcov_function_coverage=1 00:01:40.284 --rc genhtml_branch_coverage=1 00:01:40.284 --rc genhtml_function_coverage=1 00:01:40.284 --rc genhtml_legend=1 00:01:40.284 --rc geninfo_all_blocks=1 00:01:40.284 --no-external' 00:01:40.284 16:48:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:40.284 lcov: LCOV version 1.14 00:01:40.284 16:48:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:52.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:52.495 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:53.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:53.866 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:53.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:53.866 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:01:53.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:53.866 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:11.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:11.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:11.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:11.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:11.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:11.941 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:11.941 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:11.941 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:11.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:11.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:11.942 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:11.942 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:11.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no 
functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:11.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:11.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:12.508 16:49:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:12.508 16:49:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:12.508 16:49:28 -- common/autotest_common.sh@10 -- # set +x 00:02:12.508 16:49:28 -- spdk/autotest.sh@91 -- # rm -f 00:02:12.508 16:49:28 -- 
spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:13.885 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:13.885 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:13.885 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:13.885 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:13.885 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:13.885 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:13.885 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:13.885 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:13.885 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:13.885 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:13.885 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:13.885 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:13.885 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:13.885 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:13.885 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:13.885 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:13.885 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:13.885 16:49:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:13.885 16:49:29 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:13.885 16:49:29 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:13.885 16:49:29 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:13.885 16:49:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:13.885 16:49:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:13.885 16:49:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:13.885 16:49:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:13.885 16:49:29 -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:13.885 16:49:29 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:13.885 16:49:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:13.885 16:49:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:13.885 16:49:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:13.885 16:49:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:13.885 16:49:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:13.885 No valid GPT data, bailing 00:02:13.885 16:49:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:13.885 16:49:29 -- scripts/common.sh@391 -- # pt= 00:02:13.885 16:49:29 -- scripts/common.sh@392 -- # return 1 00:02:13.885 16:49:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:13.885 1+0 records in 00:02:13.885 1+0 records out 00:02:13.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00257937 s, 407 MB/s 00:02:13.885 16:49:29 -- spdk/autotest.sh@118 -- # sync 00:02:13.885 16:49:29 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:13.885 16:49:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:13.885 16:49:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:15.785 16:49:31 -- spdk/autotest.sh@124 -- # uname -s 00:02:15.785 16:49:31 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:15.785 16:49:31 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:15.785 16:49:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:15.785 16:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:15.785 16:49:31 -- common/autotest_common.sh@10 -- # set +x 00:02:15.785 ************************************ 00:02:15.785 START TEST setup.sh 00:02:15.785 ************************************ 00:02:15.785 
16:49:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:15.785 * Looking for test storage... 00:02:15.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:15.785 16:49:31 -- setup/test-setup.sh@10 -- # uname -s 00:02:15.785 16:49:31 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:15.785 16:49:31 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:15.785 16:49:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:15.785 16:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:15.785 16:49:31 -- common/autotest_common.sh@10 -- # set +x 00:02:16.043 ************************************ 00:02:16.043 START TEST acl 00:02:16.043 ************************************ 00:02:16.043 16:49:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:16.043 * Looking for test storage... 
00:02:16.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:16.043 16:49:31 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:16.043 16:49:31 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:16.043 16:49:31 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:16.043 16:49:31 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:16.043 16:49:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:16.043 16:49:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:16.043 16:49:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:16.043 16:49:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:16.043 16:49:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:16.043 16:49:31 -- setup/acl.sh@12 -- # devs=() 00:02:16.043 16:49:31 -- setup/acl.sh@12 -- # declare -a devs 00:02:16.043 16:49:31 -- setup/acl.sh@13 -- # drivers=() 00:02:16.043 16:49:31 -- setup/acl.sh@13 -- # declare -A drivers 00:02:16.043 16:49:31 -- setup/acl.sh@51 -- # setup reset 00:02:16.043 16:49:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:16.043 16:49:31 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:17.419 16:49:33 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:17.419 16:49:33 -- setup/acl.sh@16 -- # local dev driver 00:02:17.419 16:49:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.419 16:49:33 -- setup/acl.sh@15 -- # setup output status 00:02:17.419 16:49:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:17.419 16:49:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:18.793 Hugepages 00:02:18.793 node hugesize free / total 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # continue 00:02:18.793 16:49:34 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 00:02:18.793 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- 
setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # continue 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:18.793 16:49:34 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:18.793 16:49:34 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:18.793 16:49:34 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:18.793 16:49:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.793 16:49:34 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:18.793 16:49:34 -- setup/acl.sh@54 -- # run_test denied denied 00:02:18.793 16:49:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:18.793 16:49:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:18.793 16:49:34 -- common/autotest_common.sh@10 -- # set +x 00:02:18.793 ************************************ 00:02:18.793 START TEST denied 00:02:18.793 
************************************ 00:02:18.793 16:49:34 -- common/autotest_common.sh@1111 -- # denied 00:02:18.793 16:49:34 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:18.793 16:49:34 -- setup/acl.sh@38 -- # setup output config 00:02:18.793 16:49:34 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:18.793 16:49:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:18.793 16:49:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:20.169 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:20.169 16:49:35 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:20.169 16:49:35 -- setup/acl.sh@28 -- # local dev driver 00:02:20.169 16:49:35 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:20.169 16:49:35 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:20.169 16:49:35 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:20.169 16:49:35 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:20.169 16:49:35 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:20.169 16:49:35 -- setup/acl.sh@41 -- # setup reset 00:02:20.169 16:49:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:20.169 16:49:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:22.767 00:02:22.767 real 0m3.594s 00:02:22.767 user 0m1.080s 00:02:22.767 sys 0m1.689s 00:02:22.767 16:49:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:22.767 16:49:37 -- common/autotest_common.sh@10 -- # set +x 00:02:22.767 ************************************ 00:02:22.767 END TEST denied 00:02:22.767 ************************************ 00:02:22.767 16:49:37 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:22.767 16:49:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:22.767 16:49:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:22.767 
16:49:37 -- common/autotest_common.sh@10 -- # set +x 00:02:22.767 ************************************ 00:02:22.767 START TEST allowed 00:02:22.767 ************************************ 00:02:22.767 16:49:38 -- common/autotest_common.sh@1111 -- # allowed 00:02:22.767 16:49:38 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:22.767 16:49:38 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:22.767 16:49:38 -- setup/acl.sh@45 -- # setup output config 00:02:22.767 16:49:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:22.767 16:49:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:25.300 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:25.300 16:49:40 -- setup/acl.sh@47 -- # verify 00:02:25.300 16:49:40 -- setup/acl.sh@28 -- # local dev driver 00:02:25.300 16:49:40 -- setup/acl.sh@48 -- # setup reset 00:02:25.300 16:49:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:25.301 16:49:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:26.237 00:02:26.237 real 0m3.661s 00:02:26.237 user 0m0.970s 00:02:26.237 sys 0m1.586s 00:02:26.237 16:49:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:26.237 16:49:41 -- common/autotest_common.sh@10 -- # set +x 00:02:26.237 ************************************ 00:02:26.237 END TEST allowed 00:02:26.237 ************************************ 00:02:26.237 00:02:26.237 real 0m10.246s 00:02:26.237 user 0m3.211s 00:02:26.237 sys 0m5.157s 00:02:26.237 16:49:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:26.237 16:49:41 -- common/autotest_common.sh@10 -- # set +x 00:02:26.237 ************************************ 00:02:26.237 END TEST acl 00:02:26.237 ************************************ 00:02:26.237 16:49:41 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:26.237 16:49:41 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:26.237 16:49:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:26.237 16:49:41 -- common/autotest_common.sh@10 -- # set +x 00:02:26.238 ************************************ 00:02:26.238 START TEST hugepages 00:02:26.238 ************************************ 00:02:26.238 16:49:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:26.238 * Looking for test storage... 00:02:26.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:26.498 16:49:41 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:26.498 16:49:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:26.498 16:49:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:26.498 16:49:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:26.498 16:49:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:26.498 16:49:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:26.498 16:49:41 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:26.498 16:49:41 -- setup/common.sh@18 -- # local node= 00:02:26.498 16:49:41 -- setup/common.sh@19 -- # local var val 00:02:26.498 16:49:41 -- setup/common.sh@20 -- # local mem_f mem 00:02:26.498 16:49:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:26.498 16:49:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:26.498 16:49:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:26.498 16:49:41 -- setup/common.sh@28 -- # mapfile -t mem 00:02:26.498 16:49:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:26.498 16:49:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.498 16:49:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.498 16:49:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38349504 kB' 'MemAvailable: 42026060 kB' 'Buffers: 2696 kB' 'Cached: 15710452 kB' 
'SwapCached: 0 kB' 'Active: 12650836 kB' 'Inactive: 3488624 kB' 'Active(anon): 12066120 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 429508 kB' 'Mapped: 188348 kB' 'Shmem: 11639808 kB' 'KReclaimable: 194108 kB' 'Slab: 554320 kB' 'SReclaimable: 194108 kB' 'SUnreclaim: 360212 kB' 'KernelStack: 12880 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 13218484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB'
00:02:26.498 16:49:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:26.498 16:49:41 -- setup/common.sh@32 -- # continue
00:02:26.498 16:49:41 -- setup/common.sh@31 -- # IFS=': '
00:02:26.498 16:49:41 -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace continue/IFS/read entries for each remaining /proc/meminfo field elided ...]
00:02:26.499 16:49:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:26.499 16:49:41 -- setup/common.sh@33 -- # echo 2048
00:02:26.499 16:49:41 -- setup/common.sh@33 -- # return 0
00:02:26.499 16:49:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:26.499 16:49:41 -- setup/hugepages.sh@17 -- #
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:26.499 16:49:41 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:26.499 16:49:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:26.499 16:49:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:26.499 16:49:41 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:26.499 16:49:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:26.499 16:49:41 -- setup/hugepages.sh@207 -- # get_nodes
00:02:26.499 16:49:41 -- setup/hugepages.sh@27 -- # local node
00:02:26.499 16:49:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:26.499 16:49:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:26.499 16:49:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:26.499 16:49:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:26.499 16:49:41 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:26.499 16:49:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:26.499 16:49:41 -- setup/hugepages.sh@208 -- # clear_hp
00:02:26.499 16:49:41 -- setup/hugepages.sh@37 -- # local node hp
00:02:26.499 16:49:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:26.499 16:49:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:26.499 16:49:41 -- setup/hugepages.sh@41 -- # echo 0
00:02:26.499 16:49:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:26.499 16:49:41 -- setup/hugepages.sh@41 -- # echo 0
00:02:26.499 16:49:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:26.499 16:49:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:26.499 16:49:41 -- setup/hugepages.sh@41 -- # echo 0
00:02:26.499 16:49:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:26.499 16:49:41 -- setup/hugepages.sh@41 -- # echo 0
00:02:26.499 16:49:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:26.499 16:49:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:26.499 16:49:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:26.499 16:49:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:26.499 16:49:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:26.499 16:49:41 -- common/autotest_common.sh@10 -- # set +x
00:02:26.499 ************************************
00:02:26.499 START TEST default_setup
00:02:26.499 ************************************
00:02:26.499 16:49:42 -- common/autotest_common.sh@1111 -- # default_setup
00:02:26.499 16:49:42 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:26.499 16:49:42 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:26.499 16:49:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:26.499 16:49:42 -- setup/hugepages.sh@51 -- # shift
00:02:26.499 16:49:42 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:26.499 16:49:42 -- setup/hugepages.sh@52 -- # local node_ids
00:02:26.499 16:49:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:26.499 16:49:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:26.499 16:49:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:26.499 16:49:42 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:26.499 16:49:42 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:26.499 16:49:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:26.499 16:49:42 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:26.499 16:49:42 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:26.499 16:49:42 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:26.499 16:49:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:26.499 16:49:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:26.499 16:49:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:26.499 16:49:42 -- setup/hugepages.sh@73 -- # return 0
00:02:26.499 16:49:42 -- setup/hugepages.sh@137 -- # setup output
00:02:26.499 16:49:42 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:26.499 16:49:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:27.879 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:27.879 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:27.879 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:27.879 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:27.879 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:27.879 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:27.879 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:27.879 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:27.879 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:27.879 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:27.879 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:27.879 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:27.879 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:27.879 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:27.879 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:27.879 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:28.821 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:02:28.821 16:49:44 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:28.821 16:49:44 -- setup/hugepages.sh@89 -- # local node
00:02:28.821 16:49:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:28.821 16:49:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:28.821 16:49:44 -- setup/hugepages.sh@92 -- # local surp
00:02:28.821 16:49:44 -- setup/hugepages.sh@93 -- # local resv
00:02:28.821 16:49:44 -- setup/hugepages.sh@94 -- # local anon
00:02:28.821 16:49:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:28.821 16:49:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:28.821 16:49:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:28.821 16:49:44 -- setup/common.sh@18 -- # local node=
00:02:28.821 16:49:44 -- setup/common.sh@19 -- # local var val
00:02:28.821 16:49:44 -- setup/common.sh@20 -- # local mem_f mem
00:02:28.821 16:49:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.821 16:49:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:28.821 16:49:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:28.821 16:49:44 -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.821 16:49:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.821 16:49:44 -- setup/common.sh@31 -- # IFS=': '
00:02:28.821 16:49:44 -- setup/common.sh@31 -- # read -r var val _
00:02:28.821 16:49:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40350424 kB' 'MemAvailable: 44026976 kB' 'Buffers: 2696 kB' 'Cached: 15710544 kB' 'SwapCached: 0 kB' 'Active: 12670236 kB' 'Inactive: 3488624 kB' 'Active(anon): 12085520 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448940 kB' 'Mapped: 188576 kB' 'Shmem: 11639900 kB' 'KReclaimable: 194100 kB' 'Slab: 554284 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360184 kB' 'KernelStack: 12880 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13239420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd:
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB'
00:02:28.821 16:49:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:28.821 16:49:44 -- setup/common.sh@32 -- # continue
00:02:28.821 16:49:44 -- setup/common.sh@31 -- # IFS=': '
00:02:28.821 16:49:44 -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace continue/IFS/read entries for each remaining /proc/meminfo field elided ...]
00:02:28.822 16:49:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:28.822 16:49:44 -- setup/common.sh@33 -- # echo 0
00:02:28.822 16:49:44 -- setup/common.sh@33 -- # return 0
00:02:28.822 16:49:44 -- setup/hugepages.sh@97 -- # anon=0
00:02:28.822 16:49:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:28.822 16:49:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:28.822 16:49:44 -- setup/common.sh@18 -- # local node=
00:02:28.822 16:49:44 -- setup/common.sh@19 -- # local var val
00:02:28.822 16:49:44 -- setup/common.sh@20 -- # local mem_f mem
00:02:28.822 16:49:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.822 16:49:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:28.822 16:49:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:28.822 16:49:44 -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.822 16:49:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.822 16:49:44 -- setup/common.sh@31 -- # IFS=': '
00:02:28.822 16:49:44 -- setup/common.sh@31 -- # read -r var val _
00:02:28.823 16:49:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40352540 kB' 'MemAvailable: 44029092 kB' 'Buffers: 2696 kB' 'Cached: 15710544 kB' 'SwapCached: 0 kB' 'Active: 12669424 kB' 'Inactive: 3488624 kB' 'Active(anon): 12084708 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448284 kB' 'Mapped: 188576 kB' 'Shmem: 11639900 kB' 'KReclaimable: 194100 kB' 'Slab: 554332 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360232 kB' 'KernelStack: 12928 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13239432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB'
00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue
00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': '
00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace continue/IFS/read entries elided ...]
00:02:28.823 16:49:44 -- setup/common.sh@31 --
# read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 
00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.823 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.823 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.824 16:49:44 -- setup/common.sh@33 -- # echo 0 00:02:28.824 16:49:44 -- setup/common.sh@33 -- # return 0 00:02:28.824 16:49:44 -- setup/hugepages.sh@99 -- # surp=0 00:02:28.824 16:49:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:28.824 16:49:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:28.824 16:49:44 -- setup/common.sh@18 -- # local node= 00:02:28.824 16:49:44 -- setup/common.sh@19 -- # local var val 00:02:28.824 16:49:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:28.824 16:49:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:28.824 16:49:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:28.824 16:49:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:28.824 16:49:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:28.824 16:49:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40352264 kB' 'MemAvailable: 44028816 kB' 'Buffers: 2696 kB' 'Cached: 15710556 kB' 'SwapCached: 0 kB' 'Active: 12669096 kB' 'Inactive: 3488624 kB' 'Active(anon): 12084380 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447912 kB' 'Mapped: 188500 kB' 'Shmem: 11639912 kB' 'KReclaimable: 194100 kB' 'Slab: 554372 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360272 kB' 'KernelStack: 
12816 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13239448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195956 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 
00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.824 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.824 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 
00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.825 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.825 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.826 16:49:44 -- setup/common.sh@33 -- # echo 0 00:02:28.826 16:49:44 -- setup/common.sh@33 -- # return 0 00:02:28.826 16:49:44 -- setup/hugepages.sh@100 -- # resv=0 00:02:28.826 16:49:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:28.826 nr_hugepages=1024 00:02:28.826 16:49:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:28.826 resv_hugepages=0 00:02:28.826 16:49:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:28.826 surplus_hugepages=0 00:02:28.826 16:49:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:28.826 anon_hugepages=0 00:02:28.826 16:49:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:28.826 16:49:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:28.826 16:49:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:28.826 16:49:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:28.826 16:49:44 -- setup/common.sh@18 -- # local node= 00:02:28.826 16:49:44 -- setup/common.sh@19 -- # local var val 00:02:28.826 16:49:44 -- setup/common.sh@20 -- # local 
mem_f mem 00:02:28.826 16:49:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:28.826 16:49:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:28.826 16:49:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:28.826 16:49:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:28.826 16:49:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40352016 kB' 'MemAvailable: 44028568 kB' 'Buffers: 2696 kB' 'Cached: 15710572 kB' 'SwapCached: 0 kB' 'Active: 12669368 kB' 'Inactive: 3488624 kB' 'Active(anon): 12084652 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448144 kB' 'Mapped: 188424 kB' 'Shmem: 11639928 kB' 'KReclaimable: 194100 kB' 'Slab: 554356 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360256 kB' 'KernelStack: 12784 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13239460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195956 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 
00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.826 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.826 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 
00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.827 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.827 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.827 16:49:44 -- setup/common.sh@33 -- # echo 1024 00:02:28.827 16:49:44 -- setup/common.sh@33 -- # return 0 00:02:28.827 16:49:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:28.827 16:49:44 -- setup/hugepages.sh@112 -- # get_nodes 00:02:28.827 16:49:44 -- setup/hugepages.sh@27 -- # local node 00:02:28.827 16:49:44 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:02:28.827 16:49:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:28.827 16:49:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:28.827 16:49:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:28.827 16:49:44 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:28.827 16:49:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:28.827 16:49:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:28.827 16:49:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:28.827 16:49:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:28.827 16:49:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:28.828 16:49:44 -- setup/common.sh@18 -- # local node=0 00:02:28.828 16:49:44 -- setup/common.sh@19 -- # local var val 00:02:28.828 16:49:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:28.828 16:49:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:28.828 16:49:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:28.828 16:49:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:28.828 16:49:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:28.828 16:49:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19669768 kB' 'MemUsed: 13207172 kB' 'SwapCached: 0 kB' 'Active: 6474796 kB' 'Inactive: 3327648 kB' 'Active(anon): 6242320 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9688992 kB' 'Mapped: 102744 kB' 'AnonPages: 116664 kB' 'Shmem: 6128868 kB' 'KernelStack: 6984 kB' 'PageTables: 4280 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96632 kB' 'Slab: 313992 kB' 'SReclaimable: 96632 kB' 'SUnreclaim: 217360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.828 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.828 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # continue 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.829 16:49:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.829 16:49:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.829 16:49:44 -- setup/common.sh@33 -- # echo 0 00:02:28.829 16:49:44 -- setup/common.sh@33 -- # return 0 00:02:28.829 16:49:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:28.829 16:49:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:28.829 16:49:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:28.829 16:49:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:28.829 16:49:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:28.829 node0=1024 expecting 1024 00:02:28.829 16:49:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:28.829 00:02:28.829 real 0m2.395s 00:02:28.829 user 0m0.588s 00:02:28.829 sys 0m0.822s 00:02:28.829 16:49:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:28.829 16:49:44 -- common/autotest_common.sh@10 -- # set +x 00:02:28.829 ************************************ 00:02:28.829 END TEST default_setup 00:02:28.829 ************************************ 00:02:28.829 16:49:44 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:28.829 16:49:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:28.829 16:49:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:28.829 16:49:44 -- common/autotest_common.sh@10 -- # set +x 00:02:29.089 ************************************ 00:02:29.089 
START TEST per_node_1G_alloc 00:02:29.089 ************************************ 00:02:29.089 16:49:44 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:29.089 16:49:44 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:29.089 16:49:44 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:29.089 16:49:44 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:29.089 16:49:44 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:29.089 16:49:44 -- setup/hugepages.sh@51 -- # shift 00:02:29.089 16:49:44 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:29.089 16:49:44 -- setup/hugepages.sh@52 -- # local node_ids 00:02:29.089 16:49:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:29.089 16:49:44 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:29.089 16:49:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:29.089 16:49:44 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:29.089 16:49:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:29.089 16:49:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:29.089 16:49:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:29.089 16:49:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:29.089 16:49:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:29.089 16:49:44 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:29.089 16:49:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:29.089 16:49:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:29.089 16:49:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:29.089 16:49:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:29.089 16:49:44 -- setup/hugepages.sh@73 -- # return 0 00:02:29.089 16:49:44 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:29.089 16:49:44 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:29.089 16:49:44 -- setup/hugepages.sh@146 -- # setup output 00:02:29.089 16:49:44 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:02:29.089 16:49:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:30.025 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:30.025 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:30.026 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:30.026 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:30.026 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:30.026 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:30.026 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:30.026 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:30.026 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:30.026 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:30.026 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:30.026 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:30.026 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:30.026 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:30.026 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:30.026 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:30.026 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:30.291 16:49:45 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:30.291 16:49:45 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:30.291 16:49:45 -- setup/hugepages.sh@89 -- # local node 00:02:30.291 16:49:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:30.291 16:49:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:30.291 16:49:45 -- setup/hugepages.sh@92 -- # local surp 00:02:30.291 16:49:45 -- setup/hugepages.sh@93 -- # local resv 00:02:30.291 16:49:45 -- setup/hugepages.sh@94 -- # local anon 00:02:30.291 16:49:45 -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:30.291 16:49:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:30.291 16:49:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:30.291 16:49:45 -- setup/common.sh@18 -- # local node= 00:02:30.291 16:49:45 -- setup/common.sh@19 -- # local var val 00:02:30.291 16:49:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.291 16:49:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.291 16:49:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.291 16:49:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.291 16:49:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.291 16:49:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.291 16:49:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40327300 kB' 'MemAvailable: 44003852 kB' 'Buffers: 2696 kB' 'Cached: 15710628 kB' 'SwapCached: 0 kB' 'Active: 12670032 kB' 'Inactive: 3488624 kB' 'Active(anon): 12085316 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448572 kB' 'Mapped: 188532 kB' 'Shmem: 11639984 kB' 'KReclaimable: 194100 kB' 'Slab: 554360 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360260 kB' 'KernelStack: 12784 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13239636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196228 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.291 16:49:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.291 16:49:45 -- 
setup/common.sh@32 -- # continue 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.291 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.292 16:49:45 --
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.292 16:49:45 -- setup/common.sh@33 -- # echo 0 00:02:30.292 16:49:45 -- setup/common.sh@33 -- # return 0 00:02:30.292 16:49:45 -- setup/hugepages.sh@97 -- # anon=0 00:02:30.292 16:49:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:30.292 16:49:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:30.292 16:49:45 -- setup/common.sh@18 -- # local node= 00:02:30.292 16:49:45 -- setup/common.sh@19 -- # local var val 00:02:30.292 16:49:45 -- setup/common.sh@20 -- # local mem_f mem 
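The trace above is the `get_meminfo` helper scanning `/proc/meminfo` one `key: value` pair at a time, skipping non-matching keys with `continue` and returning 0 when the key is absent. A minimal sketch of that parsing loop, assuming plain `key: value` input on stdin (the real helper in setup/common.sh reads `/proc/meminfo` or a per-node sysfs meminfo file and strips a leading `Node <id>` prefix):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop traced above (assumption: input
# arrives on stdin as plain "Key: value [unit]" lines, no "Node <id>" prefix).
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # first field is the meminfo key, second is the numeric value; the
    # unit ("kB") falls into the discarded third field
    if [ "$var" = "$get" ]; then
      echo "${val:-0}"
      return 0
    fi
  done
  echo 0   # key not present: report 0, matching the traced fallback
}

sample='MemTotal: 60541716 kB
HugePages_Total: 1024
HugePages_Surp: 0
Hugepagesize: 2048 kB'

printf '%s\n' "$sample" | get_meminfo HugePages_Total   # prints 1024
```

The `IFS=': '` split is why unit suffixes never reach `val`: `read` assigns the key and the number, and the trailing `kB` lands in the throwaway `_` field.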
00:02:30.292 16:49:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.292 16:49:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.292 16:49:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.292 16:49:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.292 16:49:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.292 16:49:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40327628 kB' 'MemAvailable: 44004180 kB' 'Buffers: 2696 kB' 'Cached: 15710628 kB' 'SwapCached: 0 kB' 'Active: 12669876 kB' 'Inactive: 3488624 kB' 'Active(anon): 12085160 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448372 kB' 'Mapped: 188524 kB' 'Shmem: 11639984 kB' 'KReclaimable: 194100 kB' 'Slab: 554344 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360244 kB' 'KernelStack: 12800 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13239648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.292 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.292 
16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.292 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.293 16:49:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.293 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.293 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.293 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.293 16:49:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.293 16:49:45 -- setup/common.sh@33 -- # echo 0 00:02:30.293 16:49:45 -- setup/common.sh@33 -- # return 0 00:02:30.293 16:49:45 -- setup/hugepages.sh@99 -- # surp=0 00:02:30.293 16:49:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:30.293 16:49:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:30.293 16:49:45 -- setup/common.sh@18 -- # local node= 00:02:30.293 16:49:45 -- setup/common.sh@19 -- # local var val 00:02:30.293 16:49:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.293 16:49:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.293 16:49:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.293 16:49:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.293 16:49:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.293 16:49:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.293 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.293 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.293 16:49:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40328668 kB' 'MemAvailable: 44005220 kB' 'Buffers: 2696 kB' 'Cached: 15710636 kB' 'SwapCached: 0 kB' 'Active: 12669468 kB' 'Inactive: 3488624 kB' 'Active(anon): 12084752 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447944 kB' 'Mapped: 188448 kB' 'Shmem: 11639992 kB' 'KReclaimable: 194100 kB' 'Slab: 554360 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360260 kB' 'KernelStack: 
12816 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13239660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:30.293 16:49:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.293 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 
00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 
00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.294 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.294 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:30.295 16:49:45 -- setup/common.sh@33 -- # echo 0 00:02:30.295 16:49:45 -- setup/common.sh@33 -- # return 0 00:02:30.295 16:49:45 -- setup/hugepages.sh@100 -- # resv=0 00:02:30.295 16:49:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:30.295 nr_hugepages=1024 00:02:30.295 16:49:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:30.295 resv_hugepages=0 00:02:30.295 16:49:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:30.295 surplus_hugepages=0 00:02:30.295 16:49:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:30.295 anon_hugepages=0 00:02:30.295 16:49:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:30.295 16:49:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:30.295 16:49:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:30.295 16:49:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:30.295 16:49:45 -- setup/common.sh@18 -- # local node= 00:02:30.295 16:49:45 -- setup/common.sh@19 -- # local var val 00:02:30.295 16:49:45 -- setup/common.sh@20 -- # local 
mem_f mem 00:02:30.295 16:49:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.295 16:49:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.295 16:49:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.295 16:49:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.295 16:49:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40328416 kB' 'MemAvailable: 44004968 kB' 'Buffers: 2696 kB' 'Cached: 15710660 kB' 'SwapCached: 0 kB' 'Active: 12669792 kB' 'Inactive: 3488624 kB' 'Active(anon): 12085076 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448244 kB' 'Mapped: 188448 kB' 'Shmem: 11640016 kB' 'KReclaimable: 194100 kB' 'Slab: 554360 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360260 kB' 'KernelStack: 12816 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13239676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 
00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.295 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.295 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 
00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # continue 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.296 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.296 16:49:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:30.296 16:49:45 -- setup/common.sh@33 -- # echo 1024 00:02:30.296 16:49:45 -- setup/common.sh@33 -- # return 0 00:02:30.296 16:49:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:30.296 16:49:45 -- setup/hugepages.sh@112 -- # get_nodes 00:02:30.296 16:49:45 -- setup/hugepages.sh@27 -- # local node 00:02:30.296 16:49:45 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:02:30.296 16:49:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:30.296 16:49:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:30.296 16:49:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:30.296 16:49:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:30.296 16:49:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:30.296 16:49:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:30.296 16:49:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:30.296 16:49:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:30.296 16:49:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:30.296 16:49:45 -- setup/common.sh@18 -- # local node=0 00:02:30.296 16:49:45 -- setup/common.sh@19 -- # local var val 00:02:30.297 16:49:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.297 16:49:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.297 16:49:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:30.297 16:49:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:30.297 16:49:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.297 16:49:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.297 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.297 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.297 16:49:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20709768 kB' 'MemUsed: 12167172 kB' 'SwapCached: 0 kB' 'Active: 6474544 kB' 'Inactive: 3327648 kB' 'Active(anon): 6242068 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9689084 kB' 'Mapped: 102768 kB' 'AnonPages: 116212 kB' 'Shmem: 6128960 kB' 'KernelStack: 6968 kB' 'PageTables: 4196 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96632 kB' 'Slab: 313888 kB' 'SReclaimable: 96632 kB' 'SUnreclaim: 217256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:30.297 16:49:45 -- setup/common.sh@32 -- # (per-field xtrace condensed: each node0 meminfo field above is compared against HugePages_Surp; every non-matching field falls through to continue) 00:02:30.298 16:49:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.298 16:49:45 -- setup/common.sh@33 -- # echo 0 00:02:30.298 16:49:45 -- setup/common.sh@33 -- # return 0 00:02:30.298 16:49:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:30.298 16:49:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:30.298 16:49:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:30.298 16:49:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:30.298 16:49:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:30.298 16:49:45 -- setup/common.sh@18 -- # local node=1 00:02:30.298 16:49:45 -- setup/common.sh@19 -- # local var val 00:02:30.298 16:49:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.298 16:49:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.298 16:49:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:30.298 16:49:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:30.298 16:49:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.298 16:49:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.298 16:49:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.298 16:49:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.298 16:49:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 19618144 kB' 'MemUsed: 8046632 kB' 'SwapCached: 0 kB'
'Active: 6194852 kB' 'Inactive: 160976 kB' 'Active(anon): 5842612 kB' 'Inactive(anon): 0 kB' 'Active(file): 352240 kB' 'Inactive(file): 160976 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6024296 kB' 'Mapped: 85680 kB' 'AnonPages: 331608 kB' 'Shmem: 5511080 kB' 'KernelStack: 5816 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97468 kB' 'Slab: 240472 kB' 'SReclaimable: 97468 kB' 'SUnreclaim: 143004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:30.298 16:49:45 -- setup/common.sh@32 -- # (per-field xtrace condensed: each node1 meminfo field above is compared against HugePages_Surp; every non-matching field falls through to continue) 00:02:30.558 16:49:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.558 16:49:46 -- setup/common.sh@33 -- # echo 0 00:02:30.558 16:49:46 -- setup/common.sh@33 -- # return 0 00:02:30.558 16:49:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:30.558 16:49:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:30.558 16:49:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:30.558 16:49:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:30.558 16:49:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:30.558 node0=512 expecting 512 00:02:30.558 16:49:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:30.558 16:49:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:30.558 16:49:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:30.558 16:49:46 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:30.558 node1=512 expecting 512 00:02:30.558 16:49:46 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:30.558 00:02:30.558 real
0m1.408s 00:02:30.558 user 0m0.594s 00:02:30.558 sys 0m0.775s 00:02:30.558 16:49:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:30.558 16:49:46 -- common/autotest_common.sh@10 -- # set +x 00:02:30.558 ************************************ 00:02:30.558 END TEST per_node_1G_alloc 00:02:30.558 ************************************ 00:02:30.558 16:49:46 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:30.558 16:49:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:30.558 16:49:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:30.558 16:49:46 -- common/autotest_common.sh@10 -- # set +x 00:02:30.558 ************************************ 00:02:30.558 START TEST even_2G_alloc 00:02:30.558 ************************************ 00:02:30.558 16:49:46 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:02:30.558 16:49:46 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:30.558 16:49:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:30.558 16:49:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:30.558 16:49:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:30.558 16:49:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:30.558 16:49:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:30.558 16:49:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:30.558 16:49:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:30.558 16:49:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:30.558 16:49:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:30.558 16:49:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:30.558 16:49:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:30.558 16:49:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:30.558 16:49:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:30.558 16:49:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:30.558 16:49:46 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:02:30.558 16:49:46 -- setup/hugepages.sh@83 -- # : 512 00:02:30.558 16:49:46 -- setup/hugepages.sh@84 -- # : 1 00:02:30.558 16:49:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:30.558 16:49:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:30.558 16:49:46 -- setup/hugepages.sh@83 -- # : 0 00:02:30.558 16:49:46 -- setup/hugepages.sh@84 -- # : 0 00:02:30.558 16:49:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:30.558 16:49:46 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:30.558 16:49:46 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:30.558 16:49:46 -- setup/hugepages.sh@153 -- # setup output 00:02:30.558 16:49:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.558 16:49:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:31.495 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:31.495 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:31.495 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:31.495 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:31.495 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:31.757 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:31.757 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:31.757 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:31.757 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:31.757 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:31.757 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:31.757 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:31.757 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:31.757 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:31.757 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:31.757 0000:80:04.1 (8086 
0e21): Already using the vfio-pci driver 00:02:31.757 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:31.757 16:49:47 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:31.757 16:49:47 -- setup/hugepages.sh@89 -- # local node 00:02:31.757 16:49:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:31.757 16:49:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:31.757 16:49:47 -- setup/hugepages.sh@92 -- # local surp 00:02:31.757 16:49:47 -- setup/hugepages.sh@93 -- # local resv 00:02:31.757 16:49:47 -- setup/hugepages.sh@94 -- # local anon 00:02:31.758 16:49:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:31.758 16:49:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:31.758 16:49:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:31.758 16:49:47 -- setup/common.sh@18 -- # local node= 00:02:31.758 16:49:47 -- setup/common.sh@19 -- # local var val 00:02:31.758 16:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:31.758 16:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.758 16:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.758 16:49:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.758 16:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.758 16:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.758 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.758 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.758 16:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40368244 kB' 'MemAvailable: 44044796 kB' 'Buffers: 2696 kB' 'Cached: 15710732 kB' 'SwapCached: 0 kB' 'Active: 12675404 kB' 'Inactive: 3488624 kB' 'Active(anon): 12090688 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 453920 kB' 'Mapped: 189076 kB' 'Shmem: 11640088 kB' 'KReclaimable: 194100 kB' 'Slab: 554288 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360188 kB' 'KernelStack: 12752 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13245980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196152 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:31.758 16:49:47 -- setup/common.sh@32 -- # (per-field xtrace condensed: each system-wide meminfo field above is compared against AnonHugePages; every non-matching field falls through to continue) 00:02:31.758 16:49:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.758 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.758 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 
00:02:31.758 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.758 16:49:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.758 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.759 16:49:47 -- setup/common.sh@33 -- # echo 0 00:02:31.759 16:49:47 -- setup/common.sh@33 -- # return 0 00:02:31.759 16:49:47 -- setup/hugepages.sh@97 -- # anon=0 00:02:31.759 16:49:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:31.759 16:49:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:31.759 16:49:47 -- setup/common.sh@18 -- # local node= 00:02:31.759 16:49:47 -- setup/common.sh@19 -- # local var val 00:02:31.759 16:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:31.759 16:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.759 16:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.759 16:49:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.759 16:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.759 16:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.759 16:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40371584 kB' 'MemAvailable: 44048136 kB' 'Buffers: 2696 kB' 'Cached: 15710732 kB' 'SwapCached: 0 kB' 'Active: 12671244 kB' 'Inactive: 3488624 kB' 'Active(anon): 12086528 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449784 kB' 'Mapped: 189380 kB' 'Shmem: 11640088 kB' 'KReclaimable: 194100 kB' 'Slab: 554288 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360188 kB' 'KernelStack: 12848 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13241228 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.759 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.759 16:49:47 -- setup/common.sh@31 -- # read -r var val _ [... the same no-match continue cycle repeats for each field: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd ...] 00:02:31.760 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.760 16:49:47 -- setup/common.sh@33 -- # echo 0 00:02:31.760 16:49:47 -- setup/common.sh@33 -- # return 0 00:02:31.760 16:49:47 -- setup/hugepages.sh@99 -- # surp=0 00:02:31.760 16:49:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:31.760 16:49:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:31.760 16:49:47 -- setup/common.sh@18 -- # local node= 00:02:31.760 16:49:47 -- setup/common.sh@19 -- # local var val 00:02:31.760 16:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:31.760 16:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.760 16:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.760 16:49:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.760 16:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.760 16:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.760 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.760 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.760 16:49:47 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40371756 kB' 'MemAvailable: 44048308 kB' 'Buffers: 2696 kB' 'Cached: 15710732 kB' 'SwapCached: 0 kB' 'Active: 12673480 kB' 'Inactive: 3488624 kB' 'Active(anon): 12088764 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451976 kB' 'Mapped: 188916 kB' 'Shmem: 11640088 kB' 'KReclaimable: 194100 kB' 'Slab: 554360 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360260 kB' 'KernelStack: 12864 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13244408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:31.760 16:49:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.760 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.760 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.760 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.760 16:49:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.760 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.760 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.760 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.760 16:49:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.760 16:49:47 -- 
setup/common.sh@32 -- # continue 00:02:31.760 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.760 16:49:47 -- setup/common.sh@31 -- # read -r var val _ [... the same no-match continue cycle repeats for each field: Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped ...; the trace continues in the next chunk] 00:02:31.761 16:49:47 -- 
16:49:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.761 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.761 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.761 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.761 16:49:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.762 16:49:47 -- setup/common.sh@33 -- # echo 0 00:02:31.762 16:49:47 -- setup/common.sh@33 -- # return 0 00:02:31.762 16:49:47 -- setup/hugepages.sh@100 -- # resv=0 00:02:31.762 16:49:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:31.762 nr_hugepages=1024 00:02:31.762 16:49:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:31.762 resv_hugepages=0 00:02:31.762 16:49:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:31.762 surplus_hugepages=0 
00:02:31.762 16:49:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:31.762 anon_hugepages=0 00:02:31.762 16:49:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:31.762 16:49:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:31.762 16:49:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:31.762 16:49:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:31.762 16:49:47 -- setup/common.sh@18 -- # local node= 00:02:31.762 16:49:47 -- setup/common.sh@19 -- # local var val 00:02:31.762 16:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:31.762 16:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.762 16:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.762 16:49:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.762 16:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.762 16:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40369488 kB' 'MemAvailable: 44046040 kB' 'Buffers: 2696 kB' 'Cached: 15710760 kB' 'SwapCached: 0 kB' 'Active: 12675528 kB' 'Inactive: 3488624 kB' 'Active(anon): 12090812 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454028 kB' 'Mapped: 189396 kB' 'Shmem: 11640116 kB' 'KReclaimable: 194100 kB' 'Slab: 554340 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360240 kB' 'KernelStack: 12864 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13246020 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196120 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- 
setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- 
setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.762 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.762 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 
00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # continue 00:02:31.763 16:49:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:31.763 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.763 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.763 16:49:47 -- setup/common.sh@33 -- # echo 1024 00:02:31.763 16:49:47 -- setup/common.sh@33 -- # return 0 00:02:31.763 16:49:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:31.763 16:49:47 -- setup/hugepages.sh@112 -- # get_nodes 00:02:31.763 16:49:47 -- setup/hugepages.sh@27 -- # local node 00:02:31.763 16:49:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:31.763 16:49:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:31.763 16:49:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:31.763 16:49:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:31.763 16:49:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:31.763 16:49:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:31.763 16:49:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:31.763 16:49:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:31.763 16:49:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:31.763 16:49:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:31.763 16:49:47 -- setup/common.sh@18 -- # local node=0 00:02:31.763 16:49:47 -- setup/common.sh@19 -- # local var val 00:02:31.763 16:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:31.763 16:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.763 16:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:31.763 16:49:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:31.763 16:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.023 16:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.023 16:49:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:32.023 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20745692 kB' 'MemUsed: 12131248 kB' 'SwapCached: 0 kB' 'Active: 6475536 kB' 'Inactive: 3327648 kB' 'Active(anon): 6243060 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9689164 kB' 'Mapped: 102956 kB' 'AnonPages: 117228 kB' 'Shmem: 6129040 kB' 'KernelStack: 7048 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96632 kB' 'Slab: 313928 kB' 'SReclaimable: 96632 kB' 'SUnreclaim: 217296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 
-- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 
00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.024 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.024 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@33 -- # echo 0 00:02:32.025 16:49:47 -- setup/common.sh@33 -- # return 0 00:02:32.025 16:49:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:32.025 16:49:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:32.025 16:49:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:32.025 16:49:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:32.025 16:49:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:32.025 16:49:47 -- setup/common.sh@18 -- # local node=1 00:02:32.025 16:49:47 -- setup/common.sh@19 -- # local var val 00:02:32.025 16:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:32.025 16:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
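The xtrace above is setup/common.sh's get_meminfo walking a per-node meminfo file: it splits each 'Key: value kB' line on `IFS=': '`, skips non-matching keys with `continue`, and echoes the value once the requested key (here HugePages_Surp) matches. A minimal standalone sketch of that read-loop pattern; the function name and the inlined sample lines are illustrative, not SPDK's:

```shell
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo read loop seen in the xtrace above:
# split each "Key: value kB" line on ':' and space, print the value for
# the requested key. Reads sample lines from stdin, not a live meminfo.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Same shape as the [[ $var == $get ]] / continue pattern in the log.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 19624048 kB' 'HugePages_Surp: 0' \
    | get_meminfo_sketch HugePages_Surp    # -> 0
```

The `_` in `read -r var val _` soaks up the trailing 'kB' unit, which is why the loop can compare and return bare numbers.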
00:02:32.025 16:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:32.025 16:49:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:32.025 16:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.025 16:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 19624048 kB' 'MemUsed: 8040728 kB' 'SwapCached: 0 kB' 'Active: 6194364 kB' 'Inactive: 160976 kB' 'Active(anon): 5842124 kB' 'Inactive(anon): 0 kB' 'Active(file): 352240 kB' 'Inactive(file): 160976 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6024308 kB' 'Mapped: 85680 kB' 'AnonPages: 331128 kB' 'Shmem: 5511092 kB' 'KernelStack: 5800 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97468 kB' 'Slab: 240412 kB' 'SReclaimable: 97468 kB' 'SUnreclaim: 142944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 
-- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # 
continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 
00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.025 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.025 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.026 16:49:47 
-- setup/common.sh@31 -- # read -r var val _ 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # continue 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.026 16:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.026 16:49:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.026 16:49:47 -- setup/common.sh@33 -- # echo 0 00:02:32.026 16:49:47 -- setup/common.sh@33 -- # return 0 00:02:32.026 16:49:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:32.026 16:49:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:32.026 16:49:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:32.026 16:49:47 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:02:32.026 16:49:47 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:32.026 node0=512 expecting 512 00:02:32.026 16:49:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:32.026 16:49:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:32.026 16:49:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:32.026 16:49:47 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:32.026 node1=512 expecting 512 00:02:32.026 16:49:47 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:32.026 00:02:32.026 real 0m1.382s 00:02:32.026 user 0m0.567s 00:02:32.026 sys 0m0.773s 00:02:32.026 16:49:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:32.026 16:49:47 -- common/autotest_common.sh@10 -- # set +x 00:02:32.026 ************************************ 00:02:32.026 END TEST even_2G_alloc 00:02:32.026 ************************************ 00:02:32.026 16:49:47 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:32.026 16:49:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:32.026 16:49:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:32.026 16:49:47 -- common/autotest_common.sh@10 -- # set +x 00:02:32.026 ************************************ 00:02:32.026 START TEST odd_alloc 00:02:32.026 ************************************ 00:02:32.026 16:49:47 -- common/autotest_common.sh@1111 -- # odd_alloc 00:02:32.026 16:49:47 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:32.026 16:49:47 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:32.026 16:49:47 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:32.026 16:49:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:32.026 16:49:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:32.026 16:49:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:32.026 16:49:47 -- setup/hugepages.sh@62 -- # user_nodes=() 
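The odd_alloc test starting here requests 1025 hugepages (nr_hugepages=1025) and, in the per-node loop that follows, ends up assigning 513 pages to one node and 512 to the other. One plausible reading of that split, each node gets the floor share and the remainder is folded into the lower-numbered node(s), as a hedged standalone sketch (the helper name is hypothetical, not SPDK's get_test_nr_hugepages_per_node):

```shell
#!/usr/bin/env bash
# Hedged sketch of splitting an odd hugepage count across NUMA nodes:
# floor share per node, remainder distributed to the first node(s),
# so 1025 pages over 2 nodes becomes 513 + 512.
split_hugepages() {
    local total=$1 nodes=$2 i share rem
    share=$((total / nodes))
    rem=$((total % nodes))
    for ((i = 0; i < nodes; i++)); do
        if ((i < rem)); then
            echo "node$i=$((share + 1))"
        else
            echo "node$i=$share"
        fi
    done
}

split_hugepages 1025 2    # -> node0=513 / node1=512
```

An even total (as in the even_2G_alloc test that just finished above) degenerates to equal shares, matching its 'node0=512 expecting 512' / 'node1=512 expecting 512' output.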
00:02:32.026 16:49:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:32.026 16:49:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:32.026 16:49:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:32.026 16:49:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:32.026 16:49:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:32.026 16:49:47 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:32.026 16:49:47 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:32.026 16:49:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:32.026 16:49:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:32.026 16:49:47 -- setup/hugepages.sh@83 -- # : 513 00:02:32.026 16:49:47 -- setup/hugepages.sh@84 -- # : 1 00:02:32.026 16:49:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:32.026 16:49:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:32.026 16:49:47 -- setup/hugepages.sh@83 -- # : 0 00:02:32.026 16:49:47 -- setup/hugepages.sh@84 -- # : 0 00:02:32.026 16:49:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:32.026 16:49:47 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:32.026 16:49:47 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:32.026 16:49:47 -- setup/hugepages.sh@160 -- # setup output 00:02:32.026 16:49:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.026 16:49:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:32.964 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:32.964 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:32.964 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:32.964 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:32.964 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:32.964 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:32.964 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:02:32.964 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:32.964 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:32.964 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:32.964 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:32.964 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:32.964 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:32.964 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:32.964 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:32.964 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:32.964 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:33.229 16:49:48 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:33.229 16:49:48 -- setup/hugepages.sh@89 -- # local node 00:02:33.229 16:49:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:33.229 16:49:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:33.229 16:49:48 -- setup/hugepages.sh@92 -- # local surp 00:02:33.229 16:49:48 -- setup/hugepages.sh@93 -- # local resv 00:02:33.229 16:49:48 -- setup/hugepages.sh@94 -- # local anon 00:02:33.229 16:49:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:33.229 16:49:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:33.229 16:49:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:33.229 16:49:48 -- setup/common.sh@18 -- # local node= 00:02:33.229 16:49:48 -- setup/common.sh@19 -- # local var val 00:02:33.229 16:49:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.229 16:49:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.229 16:49:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.229 16:49:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.229 16:49:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.229 16:49:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40377760 kB' 'MemAvailable: 44054312 kB' 'Buffers: 2696 kB' 'Cached: 15710824 kB' 'SwapCached: 0 kB' 'Active: 12663936 kB' 'Inactive: 3488624 kB' 'Active(anon): 12079220 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442208 kB' 'Mapped: 187456 kB' 'Shmem: 11640180 kB' 'KReclaimable: 194100 kB' 'Slab: 554120 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360020 kB' 'KernelStack: 12736 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 13210536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.229 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.229 16:49:48 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- 
setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 
00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:33.230 16:49:48 -- setup/common.sh@33 -- # echo 0 00:02:33.230 16:49:48 -- setup/common.sh@33 -- # return 0 00:02:33.230 16:49:48 -- setup/hugepages.sh@97 -- # anon=0 00:02:33.230 16:49:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:33.230 16:49:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:33.230 16:49:48 -- setup/common.sh@18 -- # local node= 00:02:33.230 16:49:48 -- setup/common.sh@19 -- # local var val 00:02:33.230 16:49:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.230 16:49:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.230 16:49:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.230 16:49:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.230 16:49:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.230 16:49:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40377760 kB' 'MemAvailable: 44054312 kB' 'Buffers: 2696 kB' 'Cached: 15710824 kB' 'SwapCached: 0 kB' 'Active: 12664808 kB' 
'Inactive: 3488624 kB' 'Active(anon): 12080092 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443140 kB' 'Mapped: 187456 kB' 'Shmem: 11640180 kB' 'KReclaimable: 194100 kB' 'Slab: 554120 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360020 kB' 'KernelStack: 12864 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 13209388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.230 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.230 16:49:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # 
continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 
16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.231 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.231 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.232 16:49:48 -- setup/common.sh@33 -- # echo 0 00:02:33.232 16:49:48 -- setup/common.sh@33 -- # return 0 00:02:33.232 16:49:48 -- setup/hugepages.sh@99 -- # surp=0 00:02:33.232 16:49:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:33.232 16:49:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:33.232 16:49:48 -- setup/common.sh@18 -- # local 
node= 00:02:33.232 16:49:48 -- setup/common.sh@19 -- # local var val 00:02:33.232 16:49:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.232 16:49:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.232 16:49:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.232 16:49:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.232 16:49:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.232 16:49:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40376480 kB' 'MemAvailable: 44053032 kB' 'Buffers: 2696 kB' 'Cached: 15710828 kB' 'SwapCached: 0 kB' 'Active: 12663936 kB' 'Inactive: 3488624 kB' 'Active(anon): 12079220 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442696 kB' 'Mapped: 187448 kB' 'Shmem: 11640184 kB' 'KReclaimable: 194100 kB' 'Slab: 554112 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360012 kB' 'KernelStack: 13072 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 13209404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:33.232 16:49:48 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 
16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # 
continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.232 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.232 16:49:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 
16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.233 16:49:48 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.233 16:49:48 -- setup/common.sh@33 -- # echo 0 00:02:33.233 16:49:48 -- setup/common.sh@33 -- # return 0 00:02:33.233 16:49:48 -- setup/hugepages.sh@100 -- # resv=0 00:02:33.233 16:49:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:33.233 nr_hugepages=1025 00:02:33.233 16:49:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:33.233 resv_hugepages=0 00:02:33.233 16:49:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:33.233 surplus_hugepages=0 00:02:33.233 16:49:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:33.233 anon_hugepages=0 00:02:33.233 16:49:48 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:33.233 16:49:48 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:33.233 16:49:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:33.233 16:49:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:33.233 16:49:48 -- setup/common.sh@18 -- # local node= 00:02:33.233 16:49:48 -- setup/common.sh@19 -- # local var val 00:02:33.233 16:49:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.233 16:49:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.233 16:49:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.233 16:49:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.233 16:49:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.233 16:49:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.233 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.233 16:49:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40374516 kB' 'MemAvailable: 44051068 kB' 'Buffers: 2696 kB' 'Cached: 15710840 kB' 'SwapCached: 0 kB' 
'Active: 12664668 kB' 'Inactive: 3488624 kB' 'Active(anon): 12079952 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442940 kB' 'Mapped: 187448 kB' 'Shmem: 11640196 kB' 'KReclaimable: 194100 kB' 'Slab: 554104 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360004 kB' 'KernelStack: 13168 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 13210792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.233 16:49:48 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / test / continue iterations elided for every remaining /proc/meminfo field until the scan reaches HugePages_Total ...] 00:02:33.235 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.235 16:49:48 -- setup/common.sh@33 -- # echo 1025 00:02:33.235 16:49:48 -- setup/common.sh@33 -- # return 0 00:02:33.235 16:49:48 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:33.235 16:49:48 -- setup/hugepages.sh@112 -- # get_nodes 00:02:33.235 16:49:48 -- setup/hugepages.sh@27 -- # local node 00:02:33.235 16:49:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.235 16:49:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:33.235 16:49:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.235 16:49:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:33.235 16:49:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:33.235 16:49:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:33.235 16:49:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:33.235 16:49:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:33.235 16:49:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:33.235 16:49:48
-- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:33.235 16:49:48 -- setup/common.sh@18 -- # local node=0 00:02:33.235 16:49:48 -- setup/common.sh@19 -- # local var val 00:02:33.235 16:49:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.235 16:49:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.235 16:49:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:33.235 16:49:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:33.235 16:49:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.235 16:49:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.235 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.235 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.235 16:49:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20733960 kB' 'MemUsed: 12142980 kB' 'SwapCached: 0 kB' 'Active: 6473632 kB' 'Inactive: 3327648 kB' 'Active(anon): 6241156 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9689208 kB' 'Mapped: 101768 kB' 'AnonPages: 115136 kB' 'Shmem: 6129084 kB' 'KernelStack: 7448 kB' 'PageTables: 5864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96632 kB' 'Slab: 313756 kB' 'SReclaimable: 96632 kB' 'SUnreclaim: 217124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:33.235 16:49:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.235 16:49:48 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / test / continue iterations elided for every remaining node0 meminfo field until the scan reaches HugePages_Surp ...] 00:02:33.236 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.236 16:49:48 -- setup/common.sh@33 -- # echo 0 00:02:33.236 16:49:48 --
setup/common.sh@33 -- # return 0 00:02:33.236 16:49:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:33.236 16:49:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:33.236 16:49:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:33.236 16:49:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:33.236 16:49:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:33.236 16:49:48 -- setup/common.sh@18 -- # local node=1 00:02:33.236 16:49:48 -- setup/common.sh@19 -- # local var val 00:02:33.236 16:49:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.236 16:49:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.236 16:49:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:33.236 16:49:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:33.236 16:49:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.236 16:49:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.236 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.236 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.236 16:49:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 19638084 kB' 'MemUsed: 8026692 kB' 'SwapCached: 0 kB' 'Active: 6192088 kB' 'Inactive: 160976 kB' 'Active(anon): 5839848 kB' 'Inactive(anon): 0 kB' 'Active(file): 352240 kB' 'Inactive(file): 160976 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6024364 kB' 'Mapped: 85680 kB' 'AnonPages: 328772 kB' 'Shmem: 5511148 kB' 'KernelStack: 5768 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97468 kB' 'Slab: 240348 kB' 'SReclaimable: 97468 kB' 'SUnreclaim: 142880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 
513' 'HugePages_Surp: 0' 00:02:33.236 16:49:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.236 16:49:48 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / test / continue iterations elided for the intervening node1 meminfo fields ...] 00:02:33.236 16:49:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:33.236 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.236 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:33.237 16:49:48 -- setup/common.sh@32 -- # continue 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.237 16:49:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.237 16:49:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.237 16:49:48 -- setup/common.sh@33 -- # echo 0 00:02:33.237 16:49:48 -- setup/common.sh@33 -- # return 0 00:02:33.237 16:49:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:33.237 16:49:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:33.237 16:49:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:33.237 16:49:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:33.237 16:49:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:33.237 node0=512 expecting 513 00:02:33.237 16:49:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:33.237 16:49:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:33.237 16:49:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:33.237 16:49:48 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:33.237 node1=513 expecting 512 00:02:33.237 16:49:48 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:33.237 00:02:33.237 real 0m1.313s 00:02:33.237 user 0m0.532s 00:02:33.237 sys 0m0.740s 00:02:33.237 16:49:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:33.237 16:49:48 -- common/autotest_common.sh@10 -- # set +x 00:02:33.237 ************************************ 00:02:33.237 END TEST odd_alloc 00:02:33.237 ************************************ 00:02:33.497 16:49:48 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:33.497 16:49:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:33.497 16:49:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:33.497 16:49:48 -- common/autotest_common.sh@10 -- # set +x 00:02:33.497 
************************************ 00:02:33.497 START TEST custom_alloc 00:02:33.497 ************************************ 00:02:33.497 16:49:49 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:33.497 16:49:49 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:33.497 16:49:49 -- setup/hugepages.sh@169 -- # local node 00:02:33.497 16:49:49 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:33.497 16:49:49 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:33.497 16:49:49 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:33.497 16:49:49 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:33.497 16:49:49 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:33.497 16:49:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:33.497 16:49:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:33.497 16:49:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.497 16:49:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.497 16:49:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:33.497 16:49:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.497 16:49:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.497 16:49:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.497 16:49:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:33.497 16:49:49 -- setup/hugepages.sh@83 -- # : 256 00:02:33.497 16:49:49 -- setup/hugepages.sh@84 -- # : 1 00:02:33.497 16:49:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:33.497 16:49:49 -- 
setup/hugepages.sh@83 -- # : 0 00:02:33.497 16:49:49 -- setup/hugepages.sh@84 -- # : 0 00:02:33.497 16:49:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:33.497 16:49:49 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:33.497 16:49:49 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:33.497 16:49:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:33.497 16:49:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:33.497 16:49:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.497 16:49:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.497 16:49:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.497 16:49:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.497 16:49:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.497 16:49:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.497 16:49:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.497 16:49:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:33.497 16:49:49 -- setup/hugepages.sh@78 -- # return 0 00:02:33.497 16:49:49 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:33.497 16:49:49 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:33.497 16:49:49 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:33.497 16:49:49 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:33.497 16:49:49 -- setup/hugepages.sh@182 
-- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:33.497 16:49:49 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:33.497 16:49:49 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:33.497 16:49:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.497 16:49:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.497 16:49:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.497 16:49:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.497 16:49:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.498 16:49:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.498 16:49:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.498 16:49:49 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:33.498 16:49:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.498 16:49:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:33.498 16:49:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.498 16:49:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:33.498 16:49:49 -- setup/hugepages.sh@78 -- # return 0 00:02:33.498 16:49:49 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:33.498 16:49:49 -- setup/hugepages.sh@187 -- # setup output 00:02:33.498 16:49:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.498 16:49:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:34.436 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:34.436 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:34.436 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:34.436 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:34.436 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:34.436 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:34.436 0000:00:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:02:34.436 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:34.436 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:34.436 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:34.436 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:34.436 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:34.436 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:34.436 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:34.436 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:34.436 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:34.436 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:34.699 16:49:50 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:34.699 16:49:50 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:34.699 16:49:50 -- setup/hugepages.sh@89 -- # local node 00:02:34.699 16:49:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:34.699 16:49:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:34.699 16:49:50 -- setup/hugepages.sh@92 -- # local surp 00:02:34.699 16:49:50 -- setup/hugepages.sh@93 -- # local resv 00:02:34.699 16:49:50 -- setup/hugepages.sh@94 -- # local anon 00:02:34.699 16:49:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:34.699 16:49:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:34.699 16:49:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:34.699 16:49:50 -- setup/common.sh@18 -- # local node= 00:02:34.699 16:49:50 -- setup/common.sh@19 -- # local var val 00:02:34.699 16:49:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.699 16:49:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.699 16:49:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.699 16:49:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.699 16:49:50 -- 
setup/common.sh@28 -- # mapfile -t mem 00:02:34.699 16:49:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 39333396 kB' 'MemAvailable: 43009948 kB' 'Buffers: 2696 kB' 'Cached: 15710920 kB' 'SwapCached: 0 kB' 'Active: 12664888 kB' 'Inactive: 3488624 kB' 'Active(anon): 12080172 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443208 kB' 'Mapped: 187500 kB' 'Shmem: 11640276 kB' 'KReclaimable: 194100 kB' 'Slab: 554056 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 359956 kB' 'KernelStack: 12800 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 13208592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # 
continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.699 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.699 16:49:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.700 16:49:50 -- setup/common.sh@33 -- # echo 0 00:02:34.700 16:49:50 -- setup/common.sh@33 -- # return 0 00:02:34.700 16:49:50 -- setup/hugepages.sh@97 -- # anon=0 00:02:34.700 16:49:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:34.700 16:49:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.700 16:49:50 -- setup/common.sh@18 -- # local node= 00:02:34.700 16:49:50 -- setup/common.sh@19 -- # local var val 00:02:34.700 16:49:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.700 16:49:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.700 16:49:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.700 16:49:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.700 16:49:50 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.700 16:49:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 39334704 kB' 'MemAvailable: 43011256 kB' 
'Buffers: 2696 kB' 'Cached: 15710920 kB' 'SwapCached: 0 kB' 'Active: 12664948 kB' 'Inactive: 3488624 kB' 'Active(anon): 12080232 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443324 kB' 'Mapped: 187576 kB' 'Shmem: 11640276 kB' 'KReclaimable: 194100 kB' 'Slab: 554116 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360016 kB' 'KernelStack: 12752 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 13208604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196020 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.700 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.700 16:49:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 
-- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.701 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.701 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- 
setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.702 16:49:50 -- setup/common.sh@33 -- # echo 0 00:02:34.702 16:49:50 -- setup/common.sh@33 -- # return 0 00:02:34.702 16:49:50 -- setup/hugepages.sh@99 -- # surp=0 00:02:34.702 16:49:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:34.702 16:49:50 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:02:34.702 16:49:50 -- setup/common.sh@18 -- # local node= 00:02:34.702 16:49:50 -- setup/common.sh@19 -- # local var val 00:02:34.702 16:49:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.702 16:49:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.702 16:49:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.702 16:49:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.702 16:49:50 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.702 16:49:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 39335144 kB' 'MemAvailable: 43011696 kB' 'Buffers: 2696 kB' 'Cached: 15710932 kB' 'SwapCached: 0 kB' 'Active: 12664520 kB' 'Inactive: 3488624 kB' 'Active(anon): 12079804 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442868 kB' 'Mapped: 187476 kB' 'Shmem: 11640288 kB' 'KReclaimable: 194100 kB' 'Slab: 554108 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360008 kB' 'KernelStack: 12768 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 13208620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196020 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 
14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 
00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.702 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.702 16:49:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- 
setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- 
setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@32 -- 
# continue 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.703 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.703 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.703 16:49:50 -- setup/common.sh@33 -- # echo 0 00:02:34.703 16:49:50 -- setup/common.sh@33 -- # return 0 00:02:34.703 16:49:50 -- setup/hugepages.sh@100 -- # resv=0 00:02:34.703 16:49:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:34.704 nr_hugepages=1536 00:02:34.704 16:49:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:34.704 resv_hugepages=0 00:02:34.704 16:49:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:34.704 surplus_hugepages=0 00:02:34.704 16:49:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:34.704 anon_hugepages=0 00:02:34.704 16:49:50 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:34.704 16:49:50 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:34.704 16:49:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:34.704 16:49:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:34.704 16:49:50 -- setup/common.sh@18 -- # local node= 00:02:34.704 16:49:50 -- setup/common.sh@19 -- # local var val 00:02:34.704 16:49:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.704 16:49:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.704 16:49:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.704 16:49:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.704 16:49:50 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.704 16:49:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.704 16:49:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 39334136 kB' 'MemAvailable: 43010688 kB' 
'Buffers: 2696 kB' 'Cached: 15710948 kB' 'SwapCached: 0 kB' 'Active: 12664500 kB' 'Inactive: 3488624 kB' 'Active(anon): 12079784 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442800 kB' 'Mapped: 187476 kB' 'Shmem: 11640304 kB' 'KReclaimable: 194100 kB' 'Slab: 554108 kB' 'SReclaimable: 194100 kB' 'SUnreclaim: 360008 kB' 'KernelStack: 12768 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 13208632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:34.704 16:49:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.704 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.704 16:49:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.704 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.704 16:49:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.704 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.704 16:49:50 
-- setup/common.sh@31 -- # read -r var val _ 00:02:34.704 16:49:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.704 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.704 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.705 16:49:50
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.705 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.705 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.705 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.705 16:49:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.705 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.705 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.705 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.705 16:49:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.705 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.705 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.705 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.705 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.705 16:49:50 -- setup/common.sh@33 -- # echo 1536 00:02:34.705 16:49:50 -- setup/common.sh@33 -- # return 0 00:02:34.705 16:49:50 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:34.705 16:49:50 -- setup/hugepages.sh@112 -- # get_nodes 00:02:34.705 16:49:50 -- setup/hugepages.sh@27 -- # local node 00:02:34.705 16:49:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.705 16:49:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:34.705 16:49:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.705 16:49:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:34.705 16:49:50 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:34.705 16:49:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:34.705 16:49:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:34.705 16:49:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:34.705 16:49:50 -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:02:34.706 16:49:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.706 16:49:50 -- setup/common.sh@18 -- # local node=0 00:02:34.706 16:49:50 -- setup/common.sh@19 -- # local var val 00:02:34.706 16:49:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.706 16:49:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.706 16:49:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:34.706 16:49:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:34.706 16:49:50 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.706 16:49:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.706 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.706 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.706 16:49:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20746928 kB' 'MemUsed: 12130012 kB' 'SwapCached: 0 kB' 'Active: 6472128 kB' 'Inactive: 3327648 kB' 'Active(anon): 6239652 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9689260 kB' 'Mapped: 101796 kB' 'AnonPages: 113720 kB' 'Shmem: 6129136 kB' 'KernelStack: 6936 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96632 kB' 'Slab: 313720 kB' 'SReclaimable: 96632 kB' 'SUnreclaim: 217088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:34.706 16:49:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.706 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.706 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.706 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.706 16:49:50 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.706 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.706 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.706 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.706 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.707 16:49:50 --
setup/common.sh@33 -- # echo 0 00:02:34.707 16:49:50 -- setup/common.sh@33 -- # return 0 00:02:34.707 16:49:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:34.707 16:49:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:34.707 16:49:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:34.707 16:49:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:34.707 16:49:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.707 16:49:50 -- setup/common.sh@18 -- # local node=1 00:02:34.707 16:49:50 -- setup/common.sh@19 -- # local var val 00:02:34.707 16:49:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.707 16:49:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.707 16:49:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:34.707 16:49:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:34.707 16:49:50 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.707 16:49:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.707 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.707 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.707 16:49:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 18585952 kB' 'MemUsed: 9078824 kB' 'SwapCached: 0 kB' 'Active: 6192308 kB' 'Inactive: 160976 kB' 'Active(anon): 5840068 kB' 'Inactive(anon): 0 kB' 'Active(file): 352240 kB' 'Inactive(file): 160976 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6024400 kB' 'Mapped: 85680 kB' 'AnonPages: 329060 kB' 'Shmem: 5511184 kB' 'KernelStack: 5816 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97404 kB' 'Slab: 240324 kB' 'SReclaimable: 97404 kB' 'SUnreclaim: 142920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:34.707 16:49:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.707 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.707 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.707 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.708 16:49:50 -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.708 16:49:50 -- setup/common.sh@32 -- # continue 00:02:34.708 16:49:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.708 16:49:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.708 16:49:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.708 16:49:50 -- setup/common.sh@33 -- # echo 0 00:02:34.708 16:49:50 -- setup/common.sh@33 -- # return 0 00:02:34.708 16:49:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:34.708 16:49:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:34.708 16:49:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:34.708 16:49:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:34.708 16:49:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:34.708 node0=512 expecting 512 00:02:34.708 16:49:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:34.708 16:49:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:34.708 16:49:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:34.708 16:49:50 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:34.708 node1=1024 expecting 1024 00:02:34.708 16:49:50 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:34.708 00:02:34.708 real 0m1.318s 00:02:34.708 user 0m0.563s 00:02:34.708 sys 0m0.713s 00:02:34.708 16:49:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:34.708 16:49:50 -- common/autotest_common.sh@10 -- # set +x 00:02:34.708 ************************************ 00:02:34.708 END TEST custom_alloc 00:02:34.708 ************************************ 00:02:34.708 16:49:50 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:34.708 16:49:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:34.708 16:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:34.708 16:49:50 
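The xtrace records above all exercise one pattern from setup/common.sh's get_meminfo: split each /proc/meminfo line on ': ', compare the field name against the requested key, and echo the matching value. A minimal standalone sketch of that pattern (the stdin-based interface and function body here are a simplification for illustration, not the script's exact code, which also handles per-node meminfo files):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop seen in the trace: IFS=': ' splits
# "HugePages_Surp:     0" into var=HugePages_Surp, val=0, _=<rest>.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # The trace shows this comparison once per meminfo field,
        # with "continue" on every non-matching line.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example (assumed invocation): get_meminfo HugePages_Surp < /proc/meminfo
```

Feeding the real file through stdin keeps the sketch testable without a live /proc; the actual script instead sets mem_f=/proc/meminfo (or a per-node path) and reads it directly.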
-- common/autotest_common.sh@10 -- # set +x 00:02:34.967 ************************************ 00:02:34.967 START TEST no_shrink_alloc 00:02:34.967 ************************************ 00:02:34.967 16:49:50 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:02:34.967 16:49:50 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:34.967 16:49:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:34.967 16:49:50 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:34.967 16:49:50 -- setup/hugepages.sh@51 -- # shift 00:02:34.967 16:49:50 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:34.967 16:49:50 -- setup/hugepages.sh@52 -- # local node_ids 00:02:34.967 16:49:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:34.967 16:49:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:34.967 16:49:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:34.967 16:49:50 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:34.967 16:49:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:34.967 16:49:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:34.967 16:49:50 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:34.967 16:49:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:34.967 16:49:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:34.967 16:49:50 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:34.967 16:49:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:34.967 16:49:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:34.967 16:49:50 -- setup/hugepages.sh@73 -- # return 0 00:02:34.967 16:49:50 -- setup/hugepages.sh@198 -- # setup output 00:02:34.967 16:49:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.967 16:49:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:35.913 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:35.913 0000:88:00.0 (8086 0a54): Already 
using the vfio-pci driver 00:02:35.913 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.913 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.913 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.913 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.913 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.913 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.913 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:35.913 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:35.913 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.913 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.913 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.913 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.913 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.913 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.914 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:36.184 16:49:51 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:36.184 16:49:51 -- setup/hugepages.sh@89 -- # local node 00:02:36.184 16:49:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:36.184 16:49:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:36.184 16:49:51 -- setup/hugepages.sh@92 -- # local surp 00:02:36.184 16:49:51 -- setup/hugepages.sh@93 -- # local resv 00:02:36.184 16:49:51 -- setup/hugepages.sh@94 -- # local anon 00:02:36.184 16:49:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:36.184 16:49:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:36.184 16:49:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:36.184 16:49:51 -- setup/common.sh@18 -- # local node= 00:02:36.184 16:49:51 -- setup/common.sh@19 -- # local var val 00:02:36.184 16:49:51 -- setup/common.sh@20 
-- # local mem_f mem 00:02:36.184 16:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.184 16:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.184 16:49:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.184 16:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.184 16:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40377972 kB' 'MemAvailable: 44054428 kB' 'Buffers: 2696 kB' 'Cached: 15711020 kB' 'SwapCached: 0 kB' 'Active: 12663116 kB' 'Inactive: 3488624 kB' 'Active(anon): 12078400 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 441208 kB' 'Mapped: 187524 kB' 'Shmem: 11640376 kB' 'KReclaimable: 193908 kB' 'Slab: 553908 kB' 'SReclaimable: 193908 kB' 'SUnreclaim: 360000 kB' 'KernelStack: 12752 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13208804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # 
continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.185 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.185 16:49:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:36.185 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.186 16:49:51 -- setup/common.sh@33 -- # echo 0 00:02:36.186 16:49:51 -- setup/common.sh@33 -- # return 0 00:02:36.186 16:49:51 -- setup/hugepages.sh@97 -- # anon=0 00:02:36.186 16:49:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:36.186 16:49:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.186 16:49:51 -- setup/common.sh@18 -- # local node= 00:02:36.186 16:49:51 -- setup/common.sh@19 -- # local var val 00:02:36.186 16:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.186 16:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.186 16:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.186 16:49:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.186 16:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.186 16:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40379280 kB' 'MemAvailable: 44055736 kB' 'Buffers: 2696 kB' 'Cached: 15711024 kB' 'SwapCached: 0 kB' 'Active: 12662720 kB' 'Inactive: 3488624 kB' 'Active(anon): 12078004 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440808 kB' 'Mapped: 187584 kB' 'Shmem: 11640380 kB' 'KReclaimable: 193908 kB' 'Slab: 553924 kB' 'SReclaimable: 193908 kB' 'SUnreclaim: 360016 kB' 'KernelStack: 12720 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13208816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 
00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.186 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.186 16:49:51 -- setup/common.sh@31 -- # read -r var val _ [identical comparisons for the remaining /proc/meminfo fields elided] 00:02:36.187 16:49:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.187 16:49:51 -- setup/common.sh@33
-- # echo 0 00:02:36.187 16:49:51 -- setup/common.sh@33 -- # return 0 00:02:36.187 16:49:51 -- setup/hugepages.sh@99 -- # surp=0 00:02:36.187 16:49:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:36.187 16:49:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:36.187 16:49:51 -- setup/common.sh@18 -- # local node= 00:02:36.187 16:49:51 -- setup/common.sh@19 -- # local var val 00:02:36.187 16:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.187 16:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.187 16:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.187 16:49:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.187 16:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.187 16:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.187 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.187 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.187 16:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40379440 kB' 'MemAvailable: 44055896 kB' 'Buffers: 2696 kB' 'Cached: 15711036 kB' 'SwapCached: 0 kB' 'Active: 12662844 kB' 'Inactive: 3488624 kB' 'Active(anon): 12078128 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440916 kB' 'Mapped: 187508 kB' 'Shmem: 11640392 kB' 'KReclaimable: 193908 kB' 'Slab: 553912 kB' 'SReclaimable: 193908 kB' 'SUnreclaim: 360004 kB' 'KernelStack: 12768 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13208832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195988 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
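The trace above shows the lookup pattern used by setup/common.sh's get_meminfo: read meminfo line by line with IFS=': ', skip every field (`continue`) until the requested key matches, then echo its value. A minimal illustrative reimplementation of that technique is sketched below; it is modeled on the trace, not taken from the actual SPDK source.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the get_meminfo lookup traced above: scan a
# meminfo-style file line by line, skipping fields until the requested
# key matches, then print its value. Not the actual setup/common.sh code.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mirrors the 'continue' lines in the trace
        echo "$val"
        return 0
    done < "$mem_f"
    return 1                               # key not found
}

# Sample input mirroring the hugepage values printed in the trace above.
sample=$(mktemp)
printf '%s\n' 'HugePages_Total:    1024' 'HugePages_Free:     1024' \
    'HugePages_Rsvd:        0' 'HugePages_Surp:        0' > "$sample"
get_meminfo HugePages_Surp "$sample"   # prints: 0
rm -f "$sample"
```

With the second argument omitted the function reads /proc/meminfo directly, which is what the traced script does; the quoted-glob comparison (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) in the log is simply how bash xtrace renders the quoted `$get` pattern inside `[[ ... ]]`.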
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:36.187 16:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.187 16:49:51 -- setup/common.sh@32 -- # continue [identical comparisons for the remaining /proc/meminfo fields elided] 00:02:36.188 16:49:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.188 16:49:51 -- setup/common.sh@32 --
# continue 00:02:36.188 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.188 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.188 16:49:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.188 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.188 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.188 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.188 16:49:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.189 16:49:51 -- setup/common.sh@33 -- # echo 0 00:02:36.189 16:49:51 -- setup/common.sh@33 -- # return 0 00:02:36.189 16:49:51 -- setup/hugepages.sh@100 -- # resv=0 00:02:36.189 16:49:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:36.189 nr_hugepages=1024 00:02:36.189 16:49:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:36.189 resv_hugepages=0 00:02:36.189 16:49:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:36.189 surplus_hugepages=0 00:02:36.189 16:49:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:36.189 anon_hugepages=0 00:02:36.189 16:49:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:36.189 16:49:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:36.189 16:49:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:36.189 16:49:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:36.189 16:49:51 -- setup/common.sh@18 -- # local node= 00:02:36.189 16:49:51 -- setup/common.sh@19 -- # local var val 00:02:36.189 16:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.189 16:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.189 16:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.189 16:49:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.189 16:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.189 16:49:51 -- setup/common.sh@29 -- # 
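At this point the trace has collected surp=0, resv=0, and anon_hugepages=0, and hugepages.sh asserts `(( 1024 == nr_hugepages + surp + resv ))` before re-reading HugePages_Total. A small sketch of that accounting check follows; `check_hugepages` is a hypothetical helper name chosen for illustration, not an SPDK function.

```shell
#!/usr/bin/env bash
# Sketch of the consistency check hugepages.sh performs in the trace
# above: the kernel's HugePages_Total must equal the requested page
# count plus surplus and reserved pages. Hypothetical helper name.
check_hugepages() {
    local total=$1 nr_hugepages=$2 surp=$3 resv=$4
    (( total == nr_hugepages + surp + resv )) || return 1
    echo "nr_hugepages=$nr_hugepages surplus_hugepages=$surp resv_hugepages=$resv"
}

# Values taken from the trace: 1024 total, 1024 requested, 0 surplus, 0 reserved.
check_hugepages 1024 1024 0 0
```

On the traced run the check passes, which is why the script goes on to echo `nr_hugepages=1024`, `resv_hugepages=0`, and `surplus_hugepages=0` as seen in the log.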
mem=("${mem[@]#Node +([0-9]) }") 00:02:36.189 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.189 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.189 16:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40379440 kB' 'MemAvailable: 44055896 kB' 'Buffers: 2696 kB' 'Cached: 15711048 kB' 'SwapCached: 0 kB' 'Active: 12662848 kB' 'Inactive: 3488624 kB' 'Active(anon): 12078132 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440916 kB' 'Mapped: 187508 kB' 'Shmem: 11640404 kB' 'KReclaimable: 193908 kB' 'Slab: 553912 kB' 'SReclaimable: 193908 kB' 'SUnreclaim: 360004 kB' 'KernelStack: 12768 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13208844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195988 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:36.189 16:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.189 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.189 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.189 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.189 16:49:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.189 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.189 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.189 
16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.189 16:49:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.189 16:49:51 -- setup/common.sh@32 -- # continue [identical comparisons for the remaining /proc/meminfo fields elided] 00:02:36.190 16:49:51 --
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.190 16:49:51 -- setup/common.sh@33 -- # echo 1024 00:02:36.190 16:49:51 -- setup/common.sh@33 -- # return 0 00:02:36.190 16:49:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:36.190 16:49:51 -- setup/hugepages.sh@112 -- # get_nodes 00:02:36.190 16:49:51 -- setup/hugepages.sh@27 -- # local node 00:02:36.190 16:49:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.190 16:49:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:36.190 16:49:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.190 16:49:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:36.190 16:49:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:36.190 16:49:51 
-- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:36.190 16:49:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:36.190 16:49:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:36.190 16:49:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:36.190 16:49:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.190 16:49:51 -- setup/common.sh@18 -- # local node=0 00:02:36.190 16:49:51 -- setup/common.sh@19 -- # local var val 00:02:36.190 16:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.190 16:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.190 16:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:36.190 16:49:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:36.190 16:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.190 16:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19699076 kB' 'MemUsed: 13177864 kB' 'SwapCached: 0 kB' 'Active: 6470100 kB' 'Inactive: 3327648 kB' 'Active(anon): 6237624 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9689328 kB' 'Mapped: 101828 kB' 'AnonPages: 111548 kB' 'Shmem: 6129204 kB' 'KernelStack: 6952 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96600 kB' 'Slab: 313604 kB' 'SReclaimable: 96600 kB' 'SUnreclaim: 217004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:36.190 16:49:51 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.190 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.190 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- 
setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 
00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # continue 
00:02:36.191 16:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.191 16:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.191 16:49:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.191 16:49:51 -- setup/common.sh@33 -- # echo 0 00:02:36.191 16:49:51 -- setup/common.sh@33 -- # return 0 00:02:36.191 16:49:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:36.191 16:49:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:36.191 16:49:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:36.191 16:49:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:36.191 16:49:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:36.191 node0=1024 expecting 1024 00:02:36.191 16:49:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:36.191 16:49:51 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:36.191 16:49:51 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:36.191 16:49:51 -- setup/hugepages.sh@202 -- # setup output 00:02:36.191 16:49:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.191 16:49:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:37.136 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.136 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:37.136 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.136 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.136 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.136 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.136 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.136 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.136 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.136 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.136 
0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.136 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.136 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.136 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.136 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.397 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.397 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.397 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:37.397 16:49:52 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:37.397 16:49:52 -- setup/hugepages.sh@89 -- # local node 00:02:37.397 16:49:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:37.397 16:49:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:37.397 16:49:52 -- setup/hugepages.sh@92 -- # local surp 00:02:37.397 16:49:52 -- setup/hugepages.sh@93 -- # local resv 00:02:37.397 16:49:52 -- setup/hugepages.sh@94 -- # local anon 00:02:37.397 16:49:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:37.397 16:49:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:37.397 16:49:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:37.397 16:49:52 -- setup/common.sh@18 -- # local node= 00:02:37.397 16:49:52 -- setup/common.sh@19 -- # local var val 00:02:37.397 16:49:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.397 16:49:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.397 16:49:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.397 16:49:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.397 16:49:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.397 16:49:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.397 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.397 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.397 16:49:52 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40357380 kB' 'MemAvailable: 44033836 kB' 'Buffers: 2696 kB' 'Cached: 15711092 kB' 'SwapCached: 0 kB' 'Active: 12664152 kB' 'Inactive: 3488624 kB' 'Active(anon): 12079436 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442228 kB' 'Mapped: 187536 kB' 'Shmem: 11640448 kB' 'KReclaimable: 193908 kB' 'Slab: 553824 kB' 'SReclaimable: 193908 kB' 'SUnreclaim: 359916 kB' 'KernelStack: 12800 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13209020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:37.397 16:49:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.397 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.397 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.397 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.397 16:49:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.397 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.397 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.397 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.397 16:49:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.397 16:49:52 -- 
setup/common.sh@32 -- # continue 00:02:37.397 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.397 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.397 16:49:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.397 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 
16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 
00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.398 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.398 16:49:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.398 16:49:52 -- setup/common.sh@33 -- # echo 0 00:02:37.398 16:49:52 -- setup/common.sh@33 -- # return 0 00:02:37.398 16:49:52 -- setup/hugepages.sh@97 -- # anon=0 00:02:37.398 16:49:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:37.398 16:49:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.398 16:49:52 -- setup/common.sh@18 -- # local node= 00:02:37.398 16:49:52 -- setup/common.sh@19 -- # local var val 00:02:37.398 16:49:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.398 16:49:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.398 16:49:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.398 16:49:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.398 16:49:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.399 16:49:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40369536 kB' 'MemAvailable: 44045992 kB' 'Buffers: 2696 kB' 'Cached: 15711096 kB' 'SwapCached: 0 kB' 'Active: 12664112 kB' 'Inactive: 3488624 kB' 'Active(anon): 12079396 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442196 kB' 'Mapped: 187536 kB' 'Shmem: 11640452 kB' 'KReclaimable: 193908 kB' 'Slab: 553808 kB' 'SReclaimable: 193908 kB' 'SUnreclaim: 359900 kB' 'KernelStack: 12752 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13209032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196100 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 
00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:52 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.399 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.399 16:49:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.399 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.399 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.400 16:49:53 -- setup/common.sh@33 -- # echo 0 00:02:37.400 16:49:53 -- setup/common.sh@33 -- # return 0 00:02:37.400 16:49:53 -- setup/hugepages.sh@99 -- # surp=0 00:02:37.400 16:49:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:37.400 16:49:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:37.400 16:49:53 -- setup/common.sh@18 -- # local node= 00:02:37.400 16:49:53 -- setup/common.sh@19 -- # local var val 00:02:37.400 16:49:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.400 16:49:53 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.400 16:49:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.400 16:49:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.400 16:49:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.400 16:49:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40377220 kB' 'MemAvailable: 44053676 kB' 'Buffers: 2696 kB' 'Cached: 15711100 kB' 'SwapCached: 0 kB' 'Active: 12663356 kB' 'Inactive: 3488624 kB' 'Active(anon): 12078640 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 441396 kB' 'Mapped: 187516 kB' 'Shmem: 11640456 kB' 'KReclaimable: 193908 kB' 'Slab: 553864 kB' 'SReclaimable: 193908 kB' 'SUnreclaim: 359956 kB' 'KernelStack: 12816 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13209048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196100 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 
16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.400 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.400 16:49:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 
00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.401 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.401 16:49:53 -- setup/common.sh@33 -- # echo 0 00:02:37.401 16:49:53 -- setup/common.sh@33 -- # return 0 00:02:37.401 16:49:53 -- setup/hugepages.sh@100 -- # resv=0 00:02:37.401 16:49:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:37.401 nr_hugepages=1024 00:02:37.401 16:49:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:37.401 resv_hugepages=0 00:02:37.401 16:49:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:37.401 surplus_hugepages=0 00:02:37.401 16:49:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:37.401 anon_hugepages=0 00:02:37.401 16:49:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.401 16:49:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:37.401 16:49:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.401 16:49:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.401 16:49:53 -- setup/common.sh@18 -- # local node= 00:02:37.401 16:49:53 -- setup/common.sh@19 -- # local var val 00:02:37.401 16:49:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.401 16:49:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.401 16:49:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.401 16:49:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.401 16:49:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.401 16:49:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.401 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40377620 kB' 'MemAvailable: 44054076 kB' 'Buffers: 2696 kB' 'Cached: 15711100 kB' 'SwapCached: 0 kB' 'Active: 12664296 kB' 'Inactive: 3488624 kB' 'Active(anon): 12079580 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442372 kB' 'Mapped: 187952 kB' 'Shmem: 11640456 kB' 'KReclaimable: 193908 kB' 'Slab: 553864 kB' 'SReclaimable: 193908 kB' 'SUnreclaim: 359956 kB' 'KernelStack: 12816 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13210416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196100 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 14944256 kB' 'DirectMap1G: 52428800 kB' 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 
16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.402 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.402 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 
-- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.403 16:49:53 -- setup/common.sh@33 -- # echo 1024 00:02:37.403 16:49:53 -- setup/common.sh@33 -- # return 0 00:02:37.403 16:49:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.403 16:49:53 -- setup/hugepages.sh@112 -- # get_nodes 00:02:37.403 16:49:53 -- setup/hugepages.sh@27 -- # local node 00:02:37.403 16:49:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.403 16:49:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:37.403 16:49:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.403 16:49:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:37.403 16:49:53 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:37.403 16:49:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:37.403 16:49:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.403 16:49:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.403 16:49:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:37.403 16:49:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.403 16:49:53 -- setup/common.sh@18 -- # local node=0 00:02:37.403 16:49:53 -- setup/common.sh@19 -- # local var 
val 00:02:37.403 16:49:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.403 16:49:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.403 16:49:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:37.403 16:49:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:37.403 16:49:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.403 16:49:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19710720 kB' 'MemUsed: 13166220 kB' 'SwapCached: 0 kB' 'Active: 6470256 kB' 'Inactive: 3327648 kB' 'Active(anon): 6237780 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9689408 kB' 'Mapped: 101836 kB' 'AnonPages: 111688 kB' 'Shmem: 6129284 kB' 'KernelStack: 6952 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96600 kB' 'Slab: 313536 kB' 'SReclaimable: 96600 kB' 'SUnreclaim: 216936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.403 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.403 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # continue 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.404 16:49:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.404 16:49:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.404 16:49:53 -- setup/common.sh@33 -- # echo 0 00:02:37.404 16:49:53 -- setup/common.sh@33 -- # return 0 00:02:37.404 16:49:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.404 16:49:53 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:02:37.404 16:49:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.404 16:49:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.404 16:49:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:37.404 node0=1024 expecting 1024 00:02:37.404 16:49:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:37.404 00:02:37.404 real 0m2.612s 00:02:37.404 user 0m1.075s 00:02:37.404 sys 0m1.453s 00:02:37.404 16:49:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:37.404 16:49:53 -- common/autotest_common.sh@10 -- # set +x 00:02:37.404 ************************************ 00:02:37.404 END TEST no_shrink_alloc 00:02:37.404 ************************************ 00:02:37.404 16:49:53 -- setup/hugepages.sh@217 -- # clear_hp 00:02:37.404 16:49:53 -- setup/hugepages.sh@37 -- # local node hp 00:02:37.404 16:49:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:37.404 16:49:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.404 16:49:53 -- setup/hugepages.sh@41 -- # echo 0 00:02:37.404 16:49:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.404 16:49:53 -- setup/hugepages.sh@41 -- # echo 0 00:02:37.662 16:49:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:37.662 16:49:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.662 16:49:53 -- setup/hugepages.sh@41 -- # echo 0 00:02:37.662 16:49:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.662 16:49:53 -- setup/hugepages.sh@41 -- # echo 0 00:02:37.662 16:49:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:37.662 16:49:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:37.662 00:02:37.662 real 0m11.215s 00:02:37.662 user 0m4.223s 00:02:37.662 sys 
0m5.707s 00:02:37.662 16:49:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:37.662 16:49:53 -- common/autotest_common.sh@10 -- # set +x 00:02:37.662 ************************************ 00:02:37.662 END TEST hugepages 00:02:37.662 ************************************ 00:02:37.662 16:49:53 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:37.662 16:49:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.662 16:49:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.662 16:49:53 -- common/autotest_common.sh@10 -- # set +x 00:02:37.662 ************************************ 00:02:37.662 START TEST driver 00:02:37.662 ************************************ 00:02:37.662 16:49:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:37.662 * Looking for test storage... 00:02:37.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.662 16:49:53 -- setup/driver.sh@68 -- # setup reset 00:02:37.662 16:49:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:37.662 16:49:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.196 16:49:55 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:40.196 16:49:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:40.196 16:49:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:40.196 16:49:55 -- common/autotest_common.sh@10 -- # set +x 00:02:40.196 ************************************ 00:02:40.196 START TEST guess_driver 00:02:40.196 ************************************ 00:02:40.196 16:49:55 -- common/autotest_common.sh@1111 -- # guess_driver 00:02:40.196 16:49:55 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:40.196 16:49:55 -- setup/driver.sh@47 -- # local fail=0 00:02:40.196 16:49:55 -- 
setup/driver.sh@49 -- # pick_driver 00:02:40.196 16:49:55 -- setup/driver.sh@36 -- # vfio 00:02:40.197 16:49:55 -- setup/driver.sh@21 -- # local iommu_grups 00:02:40.197 16:49:55 -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:40.197 16:49:55 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:40.197 16:49:55 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:40.197 16:49:55 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:40.197 16:49:55 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:40.197 16:49:55 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:40.197 16:49:55 -- setup/driver.sh@14 -- # mod vfio_pci 00:02:40.197 16:49:55 -- setup/driver.sh@12 -- # dep vfio_pci 00:02:40.197 16:49:55 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:40.197 16:49:55 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:40.197 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:40.197 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:40.197 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:40.197 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:40.197 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:40.197 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:40.197 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:40.197 16:49:55 -- setup/driver.sh@30 -- # return 0 00:02:40.197 16:49:55 -- setup/driver.sh@37 -- # echo vfio-pci 00:02:40.197 16:49:55 -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:40.197 16:49:55 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:40.197 16:49:55 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 
00:02:40.197 Looking for driver=vfio-pci 00:02:40.197 16:49:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:40.197 16:49:55 -- setup/driver.sh@45 -- # setup output config 00:02:40.197 16:49:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.197 16:49:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 
-- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.132 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.132 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.132 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.392 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.392 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.392 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.392 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.392 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.392 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.392 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.392 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.392 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.392 16:49:56 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.392 16:49:56 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.392 16:49:56 -- setup/driver.sh@57 -- # read -r _ _ _ 
_ marker setup_driver 00:02:42.329 16:49:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:42.329 16:49:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:42.329 16:49:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:42.329 16:49:57 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:42.329 16:49:57 -- setup/driver.sh@65 -- # setup reset 00:02:42.329 16:49:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:42.329 16:49:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.863 00:02:44.863 real 0m4.467s 00:02:44.863 user 0m0.986s 00:02:44.863 sys 0m1.618s 00:02:44.863 16:50:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:44.863 16:50:00 -- common/autotest_common.sh@10 -- # set +x 00:02:44.863 ************************************ 00:02:44.863 END TEST guess_driver 00:02:44.863 ************************************ 00:02:44.863 00:02:44.863 real 0m6.920s 00:02:44.863 user 0m1.579s 00:02:44.863 sys 0m2.609s 00:02:44.863 16:50:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:44.863 16:50:00 -- common/autotest_common.sh@10 -- # set +x 00:02:44.863 ************************************ 00:02:44.863 END TEST driver 00:02:44.863 ************************************ 00:02:44.863 16:50:00 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:44.863 16:50:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:44.863 16:50:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:44.863 16:50:00 -- common/autotest_common.sh@10 -- # set +x 00:02:44.863 ************************************ 00:02:44.863 START TEST devices 00:02:44.863 ************************************ 00:02:44.863 16:50:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:44.863 * Looking for test storage... 
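The repetitive read/compare loop that dominates the trace above parses `setup.sh config` output line by line: field five is the `->` binding marker and field six is the driver the device ended up bound to. A minimal sketch with made-up sample lines (only the field layout and the vfio-pci expectation come from the trace; the BDF and device names are assumptions):

```shell
#!/usr/bin/env bash
# Re-check every "->" line against the expected driver, as the
# driver.sh@57-61 loop in the trace above does.
expected=vfio-pci
fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == "->" ]] || continue            # skip non-binding lines
    [[ $setup_driver == "$expected" ]] || fail=1
done <<'EOF'
0000:88:00.0 (8086 0a54): nvme -> vfio-pci
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
EOF
echo "fail=$fail"
```

With every device bound to the expected driver this prints `fail=0`, which is why the trace ends with `(( fail == 0 ))` succeeding before `setup reset`.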
00:02:44.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:44.863 16:50:00 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:44.863 16:50:00 -- setup/devices.sh@192 -- # setup reset 00:02:44.864 16:50:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.864 16:50:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:46.256 16:50:01 -- setup/devices.sh@194 -- # get_zoned_devs 00:02:46.256 16:50:01 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:46.256 16:50:01 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:46.256 16:50:01 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:46.256 16:50:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:46.256 16:50:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:46.256 16:50:01 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:46.256 16:50:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.256 16:50:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:46.256 16:50:01 -- setup/devices.sh@196 -- # blocks=() 00:02:46.256 16:50:01 -- setup/devices.sh@196 -- # declare -a blocks 00:02:46.256 16:50:01 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:46.256 16:50:01 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:46.256 16:50:01 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:46.256 16:50:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:46.256 16:50:01 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:46.256 16:50:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:46.256 16:50:01 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:46.256 16:50:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:46.256 16:50:01 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:46.256 16:50:01 -- scripts/common.sh@378 
-- # local block=nvme0n1 pt 00:02:46.256 16:50:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:46.256 No valid GPT data, bailing 00:02:46.256 16:50:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:46.256 16:50:01 -- scripts/common.sh@391 -- # pt= 00:02:46.256 16:50:01 -- scripts/common.sh@392 -- # return 1 00:02:46.256 16:50:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:46.256 16:50:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:46.256 16:50:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:46.256 16:50:01 -- setup/common.sh@80 -- # echo 1000204886016 00:02:46.256 16:50:01 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:46.256 16:50:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:46.256 16:50:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:46.256 16:50:01 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:46.256 16:50:01 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:46.256 16:50:01 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:46.256 16:50:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:46.256 16:50:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:46.256 16:50:01 -- common/autotest_common.sh@10 -- # set +x 00:02:46.256 ************************************ 00:02:46.256 START TEST nvme_mount 00:02:46.256 ************************************ 00:02:46.256 16:50:01 -- common/autotest_common.sh@1111 -- # nvme_mount 00:02:46.256 16:50:01 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:46.256 16:50:01 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:46.256 16:50:01 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:46.256 16:50:01 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:46.256 16:50:01 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:46.256 16:50:01 -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:46.256 16:50:01 -- setup/common.sh@40 -- # local part_no=1 00:02:46.256 16:50:01 -- setup/common.sh@41 -- # local size=1073741824 00:02:46.256 16:50:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:46.256 16:50:01 -- setup/common.sh@44 -- # parts=() 00:02:46.256 16:50:01 -- setup/common.sh@44 -- # local parts 00:02:46.256 16:50:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:02:46.256 16:50:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:46.256 16:50:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:46.256 16:50:01 -- setup/common.sh@46 -- # (( part++ )) 00:02:46.256 16:50:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:46.256 16:50:01 -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:46.256 16:50:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:46.256 16:50:01 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:47.195 Creating new GPT entries in memory. 00:02:47.195 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:47.195 other utilities. 00:02:47.195 16:50:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:02:47.195 16:50:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:47.195 16:50:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:47.195 16:50:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:47.195 16:50:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:48.135 Creating new GPT entries in memory. 00:02:48.135 The operation has completed successfully. 
00:02:48.135 16:50:03 -- setup/common.sh@57 -- # (( part++ )) 00:02:48.135 16:50:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:48.135 16:50:03 -- setup/common.sh@62 -- # wait 1556311 00:02:48.135 16:50:03 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:48.135 16:50:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:48.135 16:50:03 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:48.135 16:50:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:48.135 16:50:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:48.394 16:50:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:48.394 16:50:03 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:48.394 16:50:03 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:48.394 16:50:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:48.394 16:50:03 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:48.394 16:50:03 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:48.394 16:50:03 -- setup/devices.sh@53 -- # local found=0 00:02:48.394 16:50:03 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:48.394 16:50:03 -- setup/devices.sh@56 -- # : 00:02:48.394 16:50:03 -- setup/devices.sh@59 -- # local pci status 00:02:48.394 16:50:03 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:02:48.394 16:50:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:48.394 16:50:03 -- setup/devices.sh@47 -- # setup output config 00:02:48.394 16:50:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.394 16:50:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:49.333 16:50:04 -- setup/devices.sh@63 -- # found=1 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.333 16:50:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:49.333 16:50:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.593 16:50:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:49.593 16:50:05 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:49.593 16:50:05 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.593 16:50:05 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:49.593 
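The cleanup_nvme helper traced below unmounts the test mount point and wipes the filesystem/GPT signatures from the partition and then the whole disk. A dry-run sketch of that flow (mount point and device names are taken from the trace; the echoes stand in for the destructive `wipefs` calls, so this is safe to run anywhere):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the cleanup_nvme flow (devices.sh@20-28): unmount
# if mounted, then wipe signatures from partition and disk. The real
# helper runs wipefs; here we only print what would be run.
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
cleanup_nvme() {
    if mountpoint -q "$mnt" 2>/dev/null; then
        echo "would run: umount $mnt"
    fi
    [[ -b /dev/nvme0n1p1 ]] && echo "would run: wipefs --all /dev/nvme0n1p1"
    [[ -b /dev/nvme0n1 ]] && echo "would run: wipefs --all /dev/nvme0n1"
    return 0
}
cleanup_nvme
```

Note the ordering: the partition is wiped before the parent disk, matching the `[[ -b /dev/nvme0n1p1 ]]` check preceding the `[[ -b /dev/nvme0n1 ]]` check in the trace.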
16:50:05 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:49.593 16:50:05 -- setup/devices.sh@110 -- # cleanup_nvme 00:02:49.593 16:50:05 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.593 16:50:05 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.593 16:50:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:49.593 16:50:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:49.593 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:49.593 16:50:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:49.593 16:50:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:49.851 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:49.851 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:49.851 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:49.851 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:49.851 16:50:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:49.851 16:50:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:49.851 16:50:05 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.851 16:50:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:49.851 16:50:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:49.851 16:50:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.851 16:50:05 -- setup/devices.sh@116 -- # verify 0000:88:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:49.851 16:50:05 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:49.851 16:50:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:49.851 16:50:05 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.851 16:50:05 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:49.851 16:50:05 -- setup/devices.sh@53 -- # local found=0 00:02:49.851 16:50:05 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:49.851 16:50:05 -- setup/devices.sh@56 -- # : 00:02:49.851 16:50:05 -- setup/devices.sh@59 -- # local pci status 00:02:49.851 16:50:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.851 16:50:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:49.851 16:50:05 -- setup/devices.sh@47 -- # setup output config 00:02:49.851 16:50:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.851 16:50:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:50.787 16:50:06 -- setup/devices.sh@63 -- # found=1 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.787 16:50:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.787 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:51.045 16:50:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:51.045 16:50:06 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:51.045 16:50:06 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.045 16:50:06 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:51.045 16:50:06 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:51.045 16:50:06 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.045 16:50:06 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:51.045 16:50:06 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:51.045 16:50:06 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:51.045 16:50:06 -- setup/devices.sh@50 -- # local mount_point= 00:02:51.045 16:50:06 -- setup/devices.sh@51 -- # local test_file= 00:02:51.045 16:50:06 -- setup/devices.sh@53 -- # local found=0 00:02:51.045 16:50:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:51.045 16:50:06 -- setup/devices.sh@59 -- # local pci status 00:02:51.045 16:50:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:51.045 16:50:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:51.045 16:50:06 -- setup/devices.sh@47 -- # setup 
output config 00:02:51.045 16:50:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.045 16:50:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:02:52.424 16:50:07 -- setup/devices.sh@63 -- # found=1 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.424 16:50:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:52.424 16:50:07 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:52.424 16:50:07 -- setup/devices.sh@68 -- # return 0 00:02:52.424 16:50:07 -- setup/devices.sh@128 -- # cleanup_nvme 00:02:52.424 16:50:07 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.424 16:50:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:52.424 16:50:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:52.424 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:02:52.424 00:02:52.424 real 0m6.102s 00:02:52.424 user 0m1.420s 00:02:52.424 sys 0m2.283s 00:02:52.424 16:50:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:52.424 16:50:07 -- common/autotest_common.sh@10 -- # set +x 00:02:52.424 ************************************ 00:02:52.424 END TEST nvme_mount 00:02:52.424 ************************************ 00:02:52.424 16:50:07 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:02:52.424 16:50:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:52.424 16:50:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:52.424 16:50:07 -- common/autotest_common.sh@10 -- # set +x 00:02:52.424 ************************************ 00:02:52.424 START TEST dm_mount 00:02:52.424 ************************************ 00:02:52.424 16:50:07 -- common/autotest_common.sh@1111 -- # dm_mount 00:02:52.424 16:50:07 -- setup/devices.sh@144 -- # pv=nvme0n1 00:02:52.424 16:50:07 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:02:52.424 16:50:07 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:02:52.424 16:50:07 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:02:52.424 16:50:07 -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:52.424 16:50:07 -- setup/common.sh@40 -- # local part_no=2 00:02:52.424 16:50:07 -- setup/common.sh@41 -- # local size=1073741824 00:02:52.424 16:50:07 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:52.424 16:50:07 -- setup/common.sh@44 -- # parts=() 00:02:52.424 16:50:07 -- setup/common.sh@44 -- # local parts 00:02:52.424 16:50:07 -- setup/common.sh@46 -- # (( part = 1 )) 00:02:52.424 16:50:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:52.424 16:50:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:52.424 16:50:07 -- setup/common.sh@46 -- # (( part++ )) 00:02:52.424 16:50:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:52.424 16:50:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:02:52.424 16:50:07 -- setup/common.sh@46 -- # (( part++ )) 00:02:52.424 16:50:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:52.424 16:50:07 -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:52.424 16:50:07 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:52.424 16:50:07 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:02:53.359 Creating new GPT entries in memory. 00:02:53.359 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:53.359 other utilities. 00:02:53.359 16:50:09 -- setup/common.sh@57 -- # (( part = 1 )) 00:02:53.359 16:50:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:53.359 16:50:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:53.359 16:50:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:53.359 16:50:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:54.738 Creating new GPT entries in memory. 00:02:54.738 The operation has completed successfully. 00:02:54.738 16:50:10 -- setup/common.sh@57 -- # (( part++ )) 00:02:54.738 16:50:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:54.738 16:50:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:54.738 16:50:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:54.738 16:50:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:02:55.675 The operation has completed successfully. 
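The two `sgdisk --new` calls in this dm_mount trace (`1:2048:2099199` and `2:2099200:4196351`) follow directly from the common.sh arithmetic shown above: a 1 GiB partition size converted to 512-byte sectors, with each partition starting right after the previous one ends (the first at sector 2048). A sketch of just that arithmetic:

```shell
#!/usr/bin/env bash
# Reproduce the partition bounds computed by common.sh@51,58-59 above.
size=$((1073741824 / 512))        # 1 GiB in 512-byte sectors = 2097152
part_start=0 part_end=0
for part in 1 2; do
    ((part_start = part_start == 0 ? 2048 : part_end + 1))
    ((part_end = part_start + size - 1))
    echo "--new=$part:$part_start:$part_end"
done
# prints:
#   --new=1:2048:2099199
#   --new=2:2099200:4196351
```

This matches the sector ranges passed to `flock /dev/nvme0n1 sgdisk ...` in the trace for both partitions.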
00:02:55.675 16:50:11 -- setup/common.sh@57 -- # (( part++ ))
00:02:55.675 16:50:11 -- setup/common.sh@57 -- # (( part <= part_no ))
00:02:55.675 16:50:11 -- setup/common.sh@62 -- # wait 1558586
00:02:55.675 16:50:11 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:02:55.675 16:50:11 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:55.675 16:50:11 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:02:55.675 16:50:11 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:02:55.675 16:50:11 -- setup/devices.sh@160 -- # for t in {1..5}
00:02:55.675 16:50:11 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:02:55.675 16:50:11 -- setup/devices.sh@161 -- # break
00:02:55.675 16:50:11 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:02:55.675 16:50:11 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:02:55.675 16:50:11 -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:02:55.675 16:50:11 -- setup/devices.sh@166 -- # dm=dm-0
00:02:55.675 16:50:11 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:02:55.675 16:50:11 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:02:55.675 16:50:11 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:55.675 16:50:11 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:02:55.675 16:50:11 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:55.676 16:50:11 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:02:55.676 16:50:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:02:55.676 16:50:11 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:55.676 16:50:11 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:02:55.676 16:50:11 -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:02:55.676 16:50:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:02:55.676 16:50:11 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:55.676 16:50:11 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:02:55.676 16:50:11 -- setup/devices.sh@53 -- # local found=0
00:02:55.676 16:50:11 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:02:55.676 16:50:11 -- setup/devices.sh@56 -- # :
00:02:55.676 16:50:11 -- setup/devices.sh@59 -- # local pci status
00:02:55.676 16:50:11 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:55.676 16:50:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:02:55.676 16:50:11 -- setup/devices.sh@47 -- # setup output config
00:02:55.676 16:50:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:55.676 16:50:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:02:56.612 16:50:12 -- setup/devices.sh@63 -- # found=1
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.612 16:50:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:56.612 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.871 16:50:12 -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:56.871 16:50:12 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:02:56.871 16:50:12 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:56.871 16:50:12 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:02:56.871 16:50:12 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:02:56.871 16:50:12 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:56.871 16:50:12 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:02:56.871 16:50:12 -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:02:56.871 16:50:12 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:02:56.871 16:50:12 -- setup/devices.sh@50 -- # local mount_point=
00:02:56.871 16:50:12 -- setup/devices.sh@51 -- # local test_file=
00:02:56.871 16:50:12 -- setup/devices.sh@53 -- # local found=0
00:02:56.871 16:50:12 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:02:56.871 16:50:12 -- setup/devices.sh@59 -- # local pci status
00:02:56.871 16:50:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.871 16:50:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:02:56.871 16:50:12 -- setup/devices.sh@47 -- # setup output config
00:02:56.871 16:50:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:56.871 16:50:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:02:57.806 16:50:13 -- setup/devices.sh@63 -- # found=1
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.806 16:50:13 -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:57.806 16:50:13 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:02:57.806 16:50:13 -- setup/devices.sh@68 -- # return 0
00:02:57.806 16:50:13 -- setup/devices.sh@187 -- # cleanup_dm
00:02:57.806 16:50:13 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:57.806 16:50:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:02:57.806 16:50:13 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:02:57.806 16:50:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:02:57.806 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:02:57.806 16:50:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:02:57.806 16:50:13 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:02:58.064
00:02:58.064 real 0m5.526s
00:02:58.064 user 0m0.907s
00:02:58.064 sys 0m1.477s
00:02:58.064 16:50:13 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:58.064 16:50:13 -- common/autotest_common.sh@10 -- # set +x
00:02:58.064 ************************************
00:02:58.064 END TEST dm_mount
00:02:58.064 ************************************
00:02:58.064 16:50:13 -- setup/devices.sh@1 -- # cleanup
00:02:58.064 16:50:13 -- setup/devices.sh@11 -- # cleanup_nvme
00:02:58.064 16:50:13 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:58.064 16:50:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:58.064 16:50:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:02:58.064 16:50:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:02:58.064 16:50:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:02:58.356 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:02:58.356 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:02:58.356 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:02:58.356 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:02:58.356 16:50:13 -- setup/devices.sh@12 -- # cleanup_dm
00:02:58.356 16:50:13 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:58.356 16:50:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:02:58.356 16:50:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:58.356 16:50:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:02:58.356 16:50:13 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:02:58.356 16:50:13 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:02:58.356
00:02:58.356 real 0m13.568s
00:02:58.356 user 0m2.990s
00:02:58.356 sys 0m4.774s
00:02:58.356 16:50:13 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:58.356 16:50:13 -- common/autotest_common.sh@10 -- # set +x
00:02:58.356 ************************************
00:02:58.356 END TEST devices
00:02:58.356 ************************************
00:02:58.356
00:02:58.356 real 0m42.456s
00:02:58.356 user 0m12.199s
00:02:58.356 sys 0m18.530s
00:02:58.356 16:50:13 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:58.356 16:50:13 -- common/autotest_common.sh@10 -- # set +x
00:02:58.356 ************************************
00:02:58.356 END TEST setup.sh
00:02:58.356 ************************************
00:02:58.356 16:50:13 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:59.291 Hugepages
00:02:59.291 node hugesize free / total
00:02:59.291 node0 1048576kB 0 / 0
00:02:59.291 node0 2048kB 2048 / 2048
00:02:59.291 node1 1048576kB 0 / 0
00:02:59.291 node1 2048kB 0 / 0
00:02:59.291
00:02:59.291 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:59.291 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:02:59.291 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:02:59.291 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:02:59.291 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:02:59.291 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:02:59.291 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:02:59.549 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:02:59.549 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:02:59.549 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:02:59.549 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:02:59.549 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:02:59.549 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:02:59.549 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:02:59.549 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:02:59.549 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:02:59.549 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:02:59.549 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:02:59.549 16:50:15 -- spdk/autotest.sh@130 -- # uname -s
00:02:59.549 16:50:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:02:59.549 16:50:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:02:59.549 16:50:15 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:00.483 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:00.483 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:00.483 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:00.483 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:00.483 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:00.483 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:00.483 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:00.483 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:00.483 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:00.483 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:00.483 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:00.483 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:00.483 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:00.483 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:00.483 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:00.483 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:01.857 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:01.857 16:50:17 -- common/autotest_common.sh@1518 -- # sleep 1
00:03:02.792 16:50:18 -- common/autotest_common.sh@1519 -- # bdfs=()
00:03:02.792 16:50:18 -- common/autotest_common.sh@1519 -- # local bdfs
00:03:02.792 16:50:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:02.792 16:50:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:02.793 16:50:18 -- common/autotest_common.sh@1499 -- # bdfs=()
00:03:02.793 16:50:18 -- common/autotest_common.sh@1499 -- # local bdfs
00:03:02.793 16:50:18 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:02.793 16:50:18 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:02.793 16:50:18 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:03:02.793 16:50:18 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:03:02.793 16:50:18 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0
00:03:02.793 16:50:18 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:04.169 Waiting for block devices as requested
00:03:04.169 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:03:04.169 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:04.169 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:04.169 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:04.169 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:04.169 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:04.428 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:04.428 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:04.428 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:04.428 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:04.686 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:04.686 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:04.686 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:04.686 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:04.945 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:04.945 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:04.945 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:05.204 16:50:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:05.204 16:50:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:03:05.204 16:50:20 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0
00:03:05.204 16:50:20 -- common/autotest_common.sh@1488 -- # grep 0000:88:00.0/nvme/nvme
00:03:05.204 16:50:20 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:05.204 16:50:20 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:03:05.204 16:50:20 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:05.204 16:50:20 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0
00:03:05.204 16:50:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:05.204 16:50:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:05.204 16:50:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:05.204 16:50:20 -- common/autotest_common.sh@1531 -- # grep oacs
00:03:05.204 16:50:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:05.204 16:50:20 -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:03:05.204 16:50:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:05.204 16:50:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:05.204 16:50:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:05.204 16:50:20 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:05.204 16:50:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:05.204 16:50:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:05.204 16:50:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:03:05.204 16:50:20 -- common/autotest_common.sh@1543 -- # continue
00:03:05.204 16:50:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:03:05.204 16:50:20 -- common/autotest_common.sh@716 -- # xtrace_disable
00:03:05.204 16:50:20 -- common/autotest_common.sh@10 -- # set +x
00:03:05.204 16:50:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:03:05.204 16:50:20 -- common/autotest_common.sh@710 -- # xtrace_disable
00:03:05.204 16:50:20 -- common/autotest_common.sh@10 -- # set +x
00:03:05.204 16:50:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:06.138 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:06.138 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:06.138 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:06.138 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:06.138 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:06.138 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:06.138 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:06.138 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:06.138 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:06.138 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:06.138 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:06.138 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:06.138 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:06.396 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:06.396 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:06.396 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:07.330 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:07.330 16:50:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:03:07.330 16:50:22 -- common/autotest_common.sh@716 -- # xtrace_disable
00:03:07.330 16:50:22 -- common/autotest_common.sh@10 -- # set +x
00:03:07.330 16:50:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:03:07.330 16:50:22 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs
00:03:07.330 16:50:22 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54
00:03:07.330 16:50:22 -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:07.330 16:50:22 -- common/autotest_common.sh@1563 -- # local bdfs
00:03:07.330 16:50:22 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs
00:03:07.330 16:50:22 -- common/autotest_common.sh@1499 -- # bdfs=()
00:03:07.330 16:50:22 -- common/autotest_common.sh@1499 -- # local bdfs
00:03:07.330 16:50:22 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:07.330 16:50:22 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:07.330 16:50:22 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:03:07.588 16:50:23 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:03:07.588 16:50:23 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0
00:03:07.588 16:50:23 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs)
00:03:07.588 16:50:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:03:07.588 16:50:23 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:07.588 16:50:23 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:07.588 16:50:23 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:07.588 16:50:23 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:88:00.0
00:03:07.588 16:50:23 -- common/autotest_common.sh@1578 -- # [[ -z 0000:88:00.0 ]]
00:03:07.588 16:50:23 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1563881
00:03:07.588 16:50:23 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:03:07.588 16:50:23 -- common/autotest_common.sh@1584 -- # waitforlisten 1563881
00:03:07.588 16:50:23 -- common/autotest_common.sh@817 -- # '[' -z 1563881 ']'
00:03:07.588 16:50:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:07.588 16:50:23 -- common/autotest_common.sh@822 -- # local max_retries=100
00:03:07.588 16:50:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:50:23 -- common/autotest_common.sh@826 -- # xtrace_disable
00:03:07.588 16:50:23 -- common/autotest_common.sh@10 -- # set +x
00:03:07.588 [2024-04-18 16:50:23.124356] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
00:03:07.588 [2024-04-18 16:50:23.124462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563881 ]
00:03:07.588 EAL: No free 2048 kB hugepages reported on node 1
00:03:07.588 [2024-04-18 16:50:23.186927] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:07.845 [2024-04-18 16:50:23.300065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:08.409 16:50:24 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:03:08.409 16:50:24 -- common/autotest_common.sh@850 -- # return 0
00:03:08.409 16:50:24 -- common/autotest_common.sh@1586 -- # bdf_id=0
00:03:08.409 16:50:24 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}"
00:03:08.409 16:50:24 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:03:11.689 nvme0n1
00:03:11.689 16:50:27 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:11.689 [2024-04-18 16:50:27.358305] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:03:11.689 [2024-04-18 16:50:27.358352] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:03:11.689 request:
00:03:11.689 {
00:03:11.689 "nvme_ctrlr_name": "nvme0",
00:03:11.689 "password": "test",
00:03:11.689 "method": "bdev_nvme_opal_revert",
00:03:11.689 "req_id": 1
00:03:11.689 }
00:03:11.689 Got JSON-RPC error response
00:03:11.689 response:
00:03:11.689 {
00:03:11.689 "code": -32603,
00:03:11.689 "message": "Internal error"
00:03:11.689 }
00:03:11.689 16:50:27 -- common/autotest_common.sh@1590 -- # true
00:03:11.689 16:50:27 -- common/autotest_common.sh@1591 -- # (( ++bdf_id ))
00:03:11.689 16:50:27 -- common/autotest_common.sh@1594 -- # killprocess 1563881
00:03:11.689 16:50:27 -- common/autotest_common.sh@936 -- # '[' -z 1563881 ']'
00:03:11.689 16:50:27 -- common/autotest_common.sh@940 -- # kill -0 1563881
00:03:11.689 16:50:27 -- common/autotest_common.sh@941 -- # uname
00:03:11.689 16:50:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:03:11.689 16:50:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1563881
00:03:11.947 16:50:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:03:11.947 16:50:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:03:11.947 16:50:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1563881'
killing process with pid 1563881
00:03:11.947 16:50:27 -- common/autotest_common.sh@955 -- # kill 1563881
00:03:11.947 16:50:27 -- common/autotest_common.sh@960 -- # wait 1563881
00:03:13.844 16:50:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:03:13.844 16:50:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:03:13.844 16:50:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:03:13.844 16:50:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:03:13.844 16:50:29 -- spdk/autotest.sh@162 -- # timing_enter lib
00:03:13.844 16:50:29 -- common/autotest_common.sh@710 -- # xtrace_disable
00:03:13.844 16:50:29 -- common/autotest_common.sh@10 -- # set +x
00:03:13.844 16:50:29 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:13.844 16:50:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:13.844 16:50:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:13.844 16:50:29 -- common/autotest_common.sh@10 -- # set +x
00:03:13.844 ************************************
00:03:13.844 START TEST env
00:03:13.844 ************************************
00:03:13.844 16:50:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:13.844 * Looking for test storage...
00:03:13.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:13.844 16:50:29 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:13.844 16:50:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:13.844 16:50:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:13.844 16:50:29 -- common/autotest_common.sh@10 -- # set +x
00:03:13.844 ************************************
00:03:13.844 START TEST env_memory
00:03:13.844 ************************************
00:03:13.844 16:50:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:13.844
00:03:13.844
00:03:13.844 CUnit - A unit testing framework for C - Version 2.1-3
00:03:13.844 http://cunit.sourceforge.net/
00:03:13.844
00:03:13.844
00:03:13.844 Suite: memory
00:03:13.844 Test: alloc and free memory map ...[2024-04-18 16:50:29.498388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:13.844 passed
00:03:13.844 Test: mem map translation ...[2024-04-18 16:50:29.520476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:13.844 [2024-04-18 16:50:29.520513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:13.844 [2024-04-18 16:50:29.520579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:13.844 [2024-04-18 16:50:29.520592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:14.102 passed
00:03:14.102 Test: mem map registration ...[2024-04-18 16:50:29.567523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:03:14.102 [2024-04-18 16:50:29.567545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:03:14.102 passed
00:03:14.102 Test: mem map adjacent registrations ...passed
00:03:14.102
00:03:14.102 Run Summary: Type Total Ran Passed Failed Inactive
00:03:14.102 suites 1 1 n/a 0 0
00:03:14.102 tests 4 4 4 0 0
00:03:14.102 asserts 152 152 152 0 n/a
00:03:14.102
00:03:14.102 Elapsed time = 0.156 seconds
00:03:14.102
00:03:14.102 real 0m0.164s
00:03:14.102 user 0m0.155s
00:03:14.102 sys 0m0.008s
16:50:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:14.102 16:50:29 -- common/autotest_common.sh@10 -- # set +x
00:03:14.102 ************************************
00:03:14.102 END TEST env_memory
00:03:14.102 ************************************
00:03:14.102 16:50:29 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:14.102 16:50:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:14.102 16:50:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:14.102 16:50:29 -- common/autotest_common.sh@10 -- # set +x
00:03:14.102 ************************************
00:03:14.102 START TEST env_vtophys
00:03:14.102 ************************************
00:03:14.102 16:50:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:14.102 EAL: lib.eal log level changed from notice to debug
00:03:14.102 EAL: Detected lcore 0 as core 0 on socket 0
00:03:14.102 EAL: Detected lcore 1 as core 1 on socket 0
00:03:14.102 EAL: Detected lcore 2 as core 2 on socket 0
00:03:14.102 EAL: Detected lcore 3 as core 3 on socket 0
00:03:14.102 EAL: Detected lcore 4 as core 4 on socket 0
00:03:14.102 EAL: Detected lcore 5 as core 5 on socket 0
00:03:14.102 EAL: Detected lcore 6 as core 8 on socket 0
00:03:14.102 EAL: Detected lcore 7 as core 9 on socket 0
00:03:14.102 EAL: Detected lcore 8 as core 10 on socket 0
00:03:14.102 EAL: Detected lcore 9 as core 11 on socket 0
00:03:14.102 EAL: Detected lcore 10 as core 12 on socket 0
00:03:14.102 EAL: Detected lcore 11 as core 13 on socket 0
00:03:14.102 EAL: Detected lcore 12 as core 0 on socket 1
00:03:14.102 EAL: Detected lcore 13 as core 1 on socket 1
00:03:14.102 EAL: Detected lcore 14 as core 2 on socket 1
00:03:14.102 EAL: Detected lcore 15 as core 3 on socket 1
00:03:14.102 EAL: Detected lcore 16 as core 4 on socket 1
00:03:14.102 EAL: Detected lcore 17 as core 5 on socket 1
00:03:14.102 EAL: Detected lcore 18 as core 8 on socket 1
00:03:14.102 EAL: Detected lcore 19 as core 9 on socket 1
00:03:14.102 EAL: Detected lcore 20 as core 10 on socket 1
00:03:14.102 EAL: Detected lcore 21 as core 11 on socket 1
00:03:14.102 EAL: Detected lcore 22 as core 12 on socket
1 00:03:14.102 EAL: Detected lcore 23 as core 13 on socket 1 00:03:14.102 EAL: Detected lcore 24 as core 0 on socket 0 00:03:14.102 EAL: Detected lcore 25 as core 1 on socket 0 00:03:14.102 EAL: Detected lcore 26 as core 2 on socket 0 00:03:14.102 EAL: Detected lcore 27 as core 3 on socket 0 00:03:14.102 EAL: Detected lcore 28 as core 4 on socket 0 00:03:14.102 EAL: Detected lcore 29 as core 5 on socket 0 00:03:14.102 EAL: Detected lcore 30 as core 8 on socket 0 00:03:14.103 EAL: Detected lcore 31 as core 9 on socket 0 00:03:14.103 EAL: Detected lcore 32 as core 10 on socket 0 00:03:14.103 EAL: Detected lcore 33 as core 11 on socket 0 00:03:14.103 EAL: Detected lcore 34 as core 12 on socket 0 00:03:14.103 EAL: Detected lcore 35 as core 13 on socket 0 00:03:14.103 EAL: Detected lcore 36 as core 0 on socket 1 00:03:14.103 EAL: Detected lcore 37 as core 1 on socket 1 00:03:14.103 EAL: Detected lcore 38 as core 2 on socket 1 00:03:14.103 EAL: Detected lcore 39 as core 3 on socket 1 00:03:14.103 EAL: Detected lcore 40 as core 4 on socket 1 00:03:14.103 EAL: Detected lcore 41 as core 5 on socket 1 00:03:14.103 EAL: Detected lcore 42 as core 8 on socket 1 00:03:14.103 EAL: Detected lcore 43 as core 9 on socket 1 00:03:14.103 EAL: Detected lcore 44 as core 10 on socket 1 00:03:14.103 EAL: Detected lcore 45 as core 11 on socket 1 00:03:14.103 EAL: Detected lcore 46 as core 12 on socket 1 00:03:14.103 EAL: Detected lcore 47 as core 13 on socket 1 00:03:14.103 EAL: Maximum logical cores by configuration: 128 00:03:14.103 EAL: Detected CPU lcores: 48 00:03:14.103 EAL: Detected NUMA nodes: 2 00:03:14.103 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:14.103 EAL: Detected shared linkage of DPDK 00:03:14.103 EAL: No shared files mode enabled, IPC will be disabled 00:03:14.103 EAL: Bus pci wants IOVA as 'DC' 00:03:14.103 EAL: Buses did not request a specific IOVA mode. 00:03:14.103 EAL: IOMMU is available, selecting IOVA as VA mode. 
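The IOVA mode negotiation logged here (buses answer 'DC' for don't-care, an IOMMU is available, so IOVA-as-VA is selected) can be summarized as a small decision helper. This is a hypothetical shell sketch of the decision the log shows, not EAL's actual implementation, which also weighs kernel driver bindings and force flags:

```shell
# Hypothetical sketch of the IOVA decision EAL logs above: when buses answer
# "DC" (don't care) and an IOMMU is present, IOVA-as-VA is preferred.
pick_iova_mode() {
  local bus_pref=$1 iommu=$2
  if [ "$bus_pref" = "DC" ] && [ "$iommu" = "yes" ]; then
    echo VA   # IOMMU can remap, so virtual addresses work as IOVAs
  else
    echo PA   # fall back to physical addresses
  fi
}
pick_iova_mode DC yes   # prints VA, matching "Selected IOVA mode 'VA'"
```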
00:03:14.103 EAL: Selected IOVA mode 'VA' 00:03:14.103 EAL: No free 2048 kB hugepages reported on node 1 00:03:14.103 EAL: Probing VFIO support... 00:03:14.103 EAL: IOMMU type 1 (Type 1) is supported 00:03:14.103 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:14.103 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:14.103 EAL: VFIO support initialized 00:03:14.103 EAL: Ask a virtual area of 0x2e000 bytes 00:03:14.103 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:14.103 EAL: Setting up physically contiguous memory... 00:03:14.103 EAL: Setting maximum number of open files to 524288 00:03:14.103 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:14.103 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:14.103 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:14.103 EAL: Ask a virtual area of 0x61000 bytes 00:03:14.103 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:14.103 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:14.103 EAL: Ask a virtual area of 0x400000000 bytes 00:03:14.103 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:14.103 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:14.103 EAL: Ask a virtual area of 0x61000 bytes 00:03:14.103 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:14.103 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:14.103 EAL: Ask a virtual area of 0x400000000 bytes 00:03:14.103 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:14.103 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:14.103 EAL: Ask a virtual area of 0x61000 bytes 00:03:14.103 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:14.103 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:14.103 EAL: Ask a virtual area of 0x400000000 bytes 00:03:14.103 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:03:14.103 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:14.103 EAL: Ask a virtual area of 0x61000 bytes 00:03:14.103 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:14.103 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:14.103 EAL: Ask a virtual area of 0x400000000 bytes 00:03:14.103 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:14.103 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:14.103 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:14.103 EAL: Ask a virtual area of 0x61000 bytes 00:03:14.103 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:14.103 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:14.103 EAL: Ask a virtual area of 0x400000000 bytes 00:03:14.103 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:14.103 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:14.103 EAL: Ask a virtual area of 0x61000 bytes 00:03:14.103 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:14.103 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:14.103 EAL: Ask a virtual area of 0x400000000 bytes 00:03:14.103 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:14.103 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:14.103 EAL: Ask a virtual area of 0x61000 bytes 00:03:14.103 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:14.103 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:14.103 EAL: Ask a virtual area of 0x400000000 bytes 00:03:14.103 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:14.103 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:14.103 EAL: Ask a virtual area of 0x61000 bytes 00:03:14.103 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:14.103 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:03:14.103 EAL: Ask a virtual area of 0x400000000 bytes 00:03:14.103 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:14.103 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:14.103 EAL: Hugepages will be freed exactly as allocated. 00:03:14.103 EAL: No shared files mode enabled, IPC is disabled 00:03:14.103 EAL: No shared files mode enabled, IPC is disabled 00:03:14.103 EAL: TSC frequency is ~2700000 KHz 00:03:14.103 EAL: Main lcore 0 is ready (tid=7f8033461a00;cpuset=[0]) 00:03:14.103 EAL: Trying to obtain current memory policy. 00:03:14.103 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.103 EAL: Restoring previous memory policy: 0 00:03:14.103 EAL: request: mp_malloc_sync 00:03:14.103 EAL: No shared files mode enabled, IPC is disabled 00:03:14.103 EAL: Heap on socket 0 was expanded by 2MB 00:03:14.103 EAL: No shared files mode enabled, IPC is disabled 00:03:14.103 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:14.103 EAL: Mem event callback 'spdk:(nil)' registered 00:03:14.361 00:03:14.361 00:03:14.361 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.361 http://cunit.sourceforge.net/ 00:03:14.361 00:03:14.361 00:03:14.361 Suite: components_suite 00:03:14.361 Test: vtophys_malloc_test ...passed 00:03:14.361 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:14.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.361 EAL: Restoring previous memory policy: 4 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.361 EAL: No shared files mode enabled, IPC is disabled 00:03:14.361 EAL: Heap on socket 0 was expanded by 4MB 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.361 EAL: No shared files mode enabled, IPC is disabled 00:03:14.361 EAL: Heap on socket 0 was shrunk by 4MB 00:03:14.361 EAL: Trying to obtain current memory policy. 00:03:14.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.361 EAL: Restoring previous memory policy: 4 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.361 EAL: No shared files mode enabled, IPC is disabled 00:03:14.361 EAL: Heap on socket 0 was expanded by 6MB 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.361 EAL: No shared files mode enabled, IPC is disabled 00:03:14.361 EAL: Heap on socket 0 was shrunk by 6MB 00:03:14.361 EAL: Trying to obtain current memory policy. 00:03:14.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.361 EAL: Restoring previous memory policy: 4 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.361 EAL: No shared files mode enabled, IPC is disabled 00:03:14.361 EAL: Heap on socket 0 was expanded by 10MB 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.361 EAL: No shared files mode enabled, IPC is disabled 00:03:14.361 EAL: Heap on socket 0 was shrunk by 10MB 00:03:14.361 EAL: Trying to obtain current memory policy. 
00:03:14.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.361 EAL: Restoring previous memory policy: 4 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.361 EAL: No shared files mode enabled, IPC is disabled 00:03:14.361 EAL: Heap on socket 0 was expanded by 18MB 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.361 EAL: No shared files mode enabled, IPC is disabled 00:03:14.361 EAL: Heap on socket 0 was shrunk by 18MB 00:03:14.361 EAL: Trying to obtain current memory policy. 00:03:14.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.361 EAL: Restoring previous memory policy: 4 00:03:14.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.361 EAL: request: mp_malloc_sync 00:03:14.362 EAL: No shared files mode enabled, IPC is disabled 00:03:14.362 EAL: Heap on socket 0 was expanded by 34MB 00:03:14.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.362 EAL: request: mp_malloc_sync 00:03:14.362 EAL: No shared files mode enabled, IPC is disabled 00:03:14.362 EAL: Heap on socket 0 was shrunk by 34MB 00:03:14.362 EAL: Trying to obtain current memory policy. 00:03:14.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.362 EAL: Restoring previous memory policy: 4 00:03:14.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.362 EAL: request: mp_malloc_sync 00:03:14.362 EAL: No shared files mode enabled, IPC is disabled 00:03:14.362 EAL: Heap on socket 0 was expanded by 66MB 00:03:14.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.362 EAL: request: mp_malloc_sync 00:03:14.362 EAL: No shared files mode enabled, IPC is disabled 00:03:14.362 EAL: Heap on socket 0 was shrunk by 66MB 00:03:14.362 EAL: Trying to obtain current memory policy. 
00:03:14.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.362 EAL: Restoring previous memory policy: 4 00:03:14.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.362 EAL: request: mp_malloc_sync 00:03:14.362 EAL: No shared files mode enabled, IPC is disabled 00:03:14.362 EAL: Heap on socket 0 was expanded by 130MB 00:03:14.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.362 EAL: request: mp_malloc_sync 00:03:14.362 EAL: No shared files mode enabled, IPC is disabled 00:03:14.362 EAL: Heap on socket 0 was shrunk by 130MB 00:03:14.362 EAL: Trying to obtain current memory policy. 00:03:14.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.362 EAL: Restoring previous memory policy: 4 00:03:14.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.362 EAL: request: mp_malloc_sync 00:03:14.362 EAL: No shared files mode enabled, IPC is disabled 00:03:14.362 EAL: Heap on socket 0 was expanded by 258MB 00:03:14.619 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.619 EAL: request: mp_malloc_sync 00:03:14.619 EAL: No shared files mode enabled, IPC is disabled 00:03:14.619 EAL: Heap on socket 0 was shrunk by 258MB 00:03:14.619 EAL: Trying to obtain current memory policy. 00:03:14.619 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:14.619 EAL: Restoring previous memory policy: 4 00:03:14.619 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.619 EAL: request: mp_malloc_sync 00:03:14.619 EAL: No shared files mode enabled, IPC is disabled 00:03:14.619 EAL: Heap on socket 0 was expanded by 514MB 00:03:14.877 EAL: Calling mem event callback 'spdk:(nil)' 00:03:14.877 EAL: request: mp_malloc_sync 00:03:14.877 EAL: No shared files mode enabled, IPC is disabled 00:03:14.878 EAL: Heap on socket 0 was shrunk by 514MB 00:03:14.878 EAL: Trying to obtain current memory policy. 
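The heap expansion steps exercised by vtophys_spdk_malloc_test (4, 6, 10, 18, 34, 66, 130, 258, 514 MB and so on) follow the pattern 2^k + 2 MB; a quick shell check of that sequence:

```shell
# Reproduce the allocation step sizes seen in the vtophys test log:
# each expand/shrink step is (2^k + 2) MB for k = 1..10.
for k in 1 2 3 4 5 6 7 8 9 10; do
  printf '%dMB ' $(( (1 << k) + 2 ))
done
echo
# prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
```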
00:03:14.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.135 EAL: Restoring previous memory policy: 4 00:03:15.135 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.135 EAL: request: mp_malloc_sync 00:03:15.135 EAL: No shared files mode enabled, IPC is disabled 00:03:15.135 EAL: Heap on socket 0 was expanded by 1026MB 00:03:15.394 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.652 EAL: request: mp_malloc_sync 00:03:15.652 EAL: No shared files mode enabled, IPC is disabled 00:03:15.652 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:15.652 passed 00:03:15.652 00:03:15.652 Run Summary: Type Total Ran Passed Failed Inactive 00:03:15.652 suites 1 1 n/a 0 0 00:03:15.652 tests 2 2 2 0 0 00:03:15.652 asserts 497 497 497 0 n/a 00:03:15.652 00:03:15.652 Elapsed time = 1.401 seconds 00:03:15.652 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.652 EAL: request: mp_malloc_sync 00:03:15.652 EAL: No shared files mode enabled, IPC is disabled 00:03:15.652 EAL: Heap on socket 0 was shrunk by 2MB 00:03:15.652 EAL: No shared files mode enabled, IPC is disabled 00:03:15.652 EAL: No shared files mode enabled, IPC is disabled 00:03:15.652 EAL: No shared files mode enabled, IPC is disabled 00:03:15.652 00:03:15.652 real 0m1.527s 00:03:15.652 user 0m0.876s 00:03:15.652 sys 0m0.612s 00:03:15.653 16:50:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:15.653 16:50:31 -- common/autotest_common.sh@10 -- # set +x 00:03:15.653 ************************************ 00:03:15.653 END TEST env_vtophys 00:03:15.653 ************************************ 00:03:15.653 16:50:31 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:15.653 16:50:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:15.653 16:50:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.653 16:50:31 -- common/autotest_common.sh@10 -- # set +x 00:03:15.911 ************************************ 00:03:15.911 
START TEST env_pci 00:03:15.911 ************************************ 00:03:15.911 16:50:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:15.911 00:03:15.911 00:03:15.911 CUnit - A unit testing framework for C - Version 2.1-3 00:03:15.911 http://cunit.sourceforge.net/ 00:03:15.911 00:03:15.911 00:03:15.911 Suite: pci 00:03:15.911 Test: pci_hook ...[2024-04-18 16:50:31.403760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1564925 has claimed it 00:03:15.911 EAL: Cannot find device (10000:00:01.0) 00:03:15.911 EAL: Failed to attach device on primary process 00:03:15.911 passed 00:03:15.911 00:03:15.911 Run Summary: Type Total Ran Passed Failed Inactive 00:03:15.911 suites 1 1 n/a 0 0 00:03:15.911 tests 1 1 1 0 0 00:03:15.911 asserts 25 25 25 0 n/a 00:03:15.911 00:03:15.911 Elapsed time = 0.022 seconds 00:03:15.911 00:03:15.911 real 0m0.035s 00:03:15.911 user 0m0.011s 00:03:15.911 sys 0m0.024s 00:03:15.911 16:50:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:15.911 16:50:31 -- common/autotest_common.sh@10 -- # set +x 00:03:15.911 ************************************ 00:03:15.911 END TEST env_pci 00:03:15.911 ************************************ 00:03:15.911 16:50:31 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:15.911 16:50:31 -- env/env.sh@15 -- # uname 00:03:15.911 16:50:31 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:15.911 16:50:31 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:15.911 16:50:31 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:15.911 16:50:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:15.911 16:50:31 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:03:15.911 16:50:31 -- common/autotest_common.sh@10 -- # set +x 00:03:15.911 ************************************ 00:03:15.911 START TEST env_dpdk_post_init 00:03:15.911 ************************************ 00:03:15.911 16:50:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:15.911 EAL: Detected CPU lcores: 48 00:03:15.911 EAL: Detected NUMA nodes: 2 00:03:15.911 EAL: Detected shared linkage of DPDK 00:03:15.911 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:15.911 EAL: Selected IOVA mode 'VA' 00:03:15.911 EAL: No free 2048 kB hugepages reported on node 1 00:03:15.911 EAL: VFIO support initialized 00:03:15.911 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:16.169 EAL: Using IOMMU type 1 (Type 1) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:16.169 EAL: Probe PCI 
driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:16.169 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:17.105 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:20.431 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:20.431 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:20.431 Starting DPDK initialization... 00:03:20.431 Starting SPDK post initialization... 00:03:20.431 SPDK NVMe probe 00:03:20.431 Attaching to 0000:88:00.0 00:03:20.431 Attached to 0000:88:00.0 00:03:20.431 Cleaning up... 00:03:20.431 00:03:20.431 real 0m4.380s 00:03:20.431 user 0m3.239s 00:03:20.431 sys 0m0.196s 00:03:20.431 16:50:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:20.431 16:50:35 -- common/autotest_common.sh@10 -- # set +x 00:03:20.431 ************************************ 00:03:20.431 END TEST env_dpdk_post_init 00:03:20.431 ************************************ 00:03:20.431 16:50:35 -- env/env.sh@26 -- # uname 00:03:20.431 16:50:35 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:20.431 16:50:35 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:20.431 16:50:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.431 16:50:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.431 16:50:35 -- common/autotest_common.sh@10 -- # set +x 00:03:20.431 ************************************ 00:03:20.431 START TEST env_mem_callbacks 00:03:20.431 ************************************ 00:03:20.431 16:50:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:20.431 EAL: Detected CPU lcores: 48 
00:03:20.431 EAL: Detected NUMA nodes: 2 00:03:20.431 EAL: Detected shared linkage of DPDK 00:03:20.431 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:20.431 EAL: Selected IOVA mode 'VA' 00:03:20.431 EAL: No free 2048 kB hugepages reported on node 1 00:03:20.431 EAL: VFIO support initialized 00:03:20.431 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:20.431 00:03:20.431 00:03:20.431 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.431 http://cunit.sourceforge.net/ 00:03:20.431 00:03:20.431 00:03:20.431 Suite: memory 00:03:20.431 Test: test ... 00:03:20.431 register 0x200000200000 2097152 00:03:20.431 malloc 3145728 00:03:20.431 register 0x200000400000 4194304 00:03:20.431 buf 0x200000500000 len 3145728 PASSED 00:03:20.431 malloc 64 00:03:20.431 buf 0x2000004fff40 len 64 PASSED 00:03:20.431 malloc 4194304 00:03:20.431 register 0x200000800000 6291456 00:03:20.431 buf 0x200000a00000 len 4194304 PASSED 00:03:20.431 free 0x200000500000 3145728 00:03:20.431 free 0x2000004fff40 64 00:03:20.431 unregister 0x200000400000 4194304 PASSED 00:03:20.431 free 0x200000a00000 4194304 00:03:20.431 unregister 0x200000800000 6291456 PASSED 00:03:20.431 malloc 8388608 00:03:20.431 register 0x200000400000 10485760 00:03:20.431 buf 0x200000600000 len 8388608 PASSED 00:03:20.431 free 0x200000600000 8388608 00:03:20.431 unregister 0x200000400000 10485760 PASSED 00:03:20.431 passed 00:03:20.431 00:03:20.431 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.431 suites 1 1 n/a 0 0 00:03:20.431 tests 1 1 1 0 0 00:03:20.431 asserts 15 15 15 0 n/a 00:03:20.431 00:03:20.431 Elapsed time = 0.005 seconds 00:03:20.431 00:03:20.431 real 0m0.047s 00:03:20.431 user 0m0.014s 00:03:20.431 sys 0m0.033s 00:03:20.431 16:50:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:20.431 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:20.431 ************************************ 00:03:20.431 END TEST env_mem_callbacks 00:03:20.431 
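In the mem_callbacks trace above, registrations happen at whole 2 MB hugepage granularity: malloc 3145728 triggers register of 4194304 bytes, and the 4 MB malloc registers 6291456 bytes once allocator overhead pushes it past two hugepages. A sketch of that round-up arithmetic (illustrative only, not SPDK code):

```shell
# Round an allocation size up to whole 2 MB hugepages, matching the
# registration sizes in the trace above (malloc 3145728 -> register 4194304).
hugepage=$((2 * 1024 * 1024))
round_up() { echo $(( (($1 + hugepage - 1) / hugepage) * hugepage )); }
round_up 3145728   # prints 4194304
```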
************************************ 00:03:20.431 00:03:20.431 real 0m6.798s 00:03:20.431 user 0m4.546s 00:03:20.431 sys 0m1.231s 00:03:20.431 16:50:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:20.431 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:20.431 ************************************ 00:03:20.431 END TEST env 00:03:20.431 ************************************ 00:03:20.689 16:50:36 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:20.689 16:50:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.689 16:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.689 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:20.689 ************************************ 00:03:20.689 START TEST rpc 00:03:20.689 ************************************ 00:03:20.689 16:50:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:20.689 * Looking for test storage... 00:03:20.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:20.689 16:50:36 -- rpc/rpc.sh@65 -- # spdk_pid=1565652 00:03:20.689 16:50:36 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:20.690 16:50:36 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:20.690 16:50:36 -- rpc/rpc.sh@67 -- # waitforlisten 1565652 00:03:20.690 16:50:36 -- common/autotest_common.sh@817 -- # '[' -z 1565652 ']' 00:03:20.690 16:50:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:20.690 16:50:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:20.690 16:50:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:20.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:20.690 16:50:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:20.690 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:20.690 [2024-04-18 16:50:36.343298] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:03:20.690 [2024-04-18 16:50:36.343417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565652 ] 00:03:20.690 EAL: No free 2048 kB hugepages reported on node 1 00:03:20.948 [2024-04-18 16:50:36.401976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:20.948 [2024-04-18 16:50:36.506275] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:20.948 [2024-04-18 16:50:36.506343] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1565652' to capture a snapshot of events at runtime. 00:03:20.948 [2024-04-18 16:50:36.506371] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:20.948 [2024-04-18 16:50:36.506390] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:20.948 [2024-04-18 16:50:36.506402] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1565652 for offline analysis/debug. 
00:03:20.948 [2024-04-18 16:50:36.506452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:21.206 16:50:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:21.206 16:50:36 -- common/autotest_common.sh@850 -- # return 0 00:03:21.206 16:50:36 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:21.206 16:50:36 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:21.206 16:50:36 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:21.206 16:50:36 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:21.206 16:50:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.206 16:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.206 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.206 ************************************ 00:03:21.206 START TEST rpc_integrity 00:03:21.206 ************************************ 00:03:21.206 16:50:36 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:21.206 16:50:36 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:21.206 16:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.206 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.206 16:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.206 16:50:36 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:21.206 16:50:36 -- rpc/rpc.sh@13 -- # jq length 00:03:21.206 16:50:36 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:03:21.206 16:50:36 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:21.206 16:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.206 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.464 16:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.464 16:50:36 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:21.464 16:50:36 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:21.465 16:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.465 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.465 16:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.465 16:50:36 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:21.465 { 00:03:21.465 "name": "Malloc0", 00:03:21.465 "aliases": [ 00:03:21.465 "538e469b-f67b-4fba-8498-3ee076fbb930" 00:03:21.465 ], 00:03:21.465 "product_name": "Malloc disk", 00:03:21.465 "block_size": 512, 00:03:21.465 "num_blocks": 16384, 00:03:21.465 "uuid": "538e469b-f67b-4fba-8498-3ee076fbb930", 00:03:21.465 "assigned_rate_limits": { 00:03:21.465 "rw_ios_per_sec": 0, 00:03:21.465 "rw_mbytes_per_sec": 0, 00:03:21.465 "r_mbytes_per_sec": 0, 00:03:21.465 "w_mbytes_per_sec": 0 00:03:21.465 }, 00:03:21.465 "claimed": false, 00:03:21.465 "zoned": false, 00:03:21.465 "supported_io_types": { 00:03:21.465 "read": true, 00:03:21.465 "write": true, 00:03:21.465 "unmap": true, 00:03:21.465 "write_zeroes": true, 00:03:21.465 "flush": true, 00:03:21.465 "reset": true, 00:03:21.465 "compare": false, 00:03:21.465 "compare_and_write": false, 00:03:21.465 "abort": true, 00:03:21.465 "nvme_admin": false, 00:03:21.465 "nvme_io": false 00:03:21.465 }, 00:03:21.465 "memory_domains": [ 00:03:21.465 { 00:03:21.465 "dma_device_id": "system", 00:03:21.465 "dma_device_type": 1 00:03:21.465 }, 00:03:21.465 { 00:03:21.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.465 "dma_device_type": 2 00:03:21.465 } 00:03:21.465 ], 00:03:21.465 "driver_specific": {} 00:03:21.465 } 00:03:21.465 ]' 00:03:21.465 
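rpc_integrity captures the bdev_get_bdevs JSON dump into $bdevs and then counts entries with jq length. A jq-free approximation of that count (the one-element array below is a trimmed, hypothetical stand-in for the full Malloc0 dump above):

```shell
# Count entries in a captured bdev_get_bdevs dump by counting "name" keys;
# the array here is a trimmed, hypothetical stand-in for the real JSON.
bdevs='[ { "name": "Malloc0" } ]'
count=$(printf '%s\n' "$bdevs" | grep -o '"name"' | grep -c .)
echo "$count"   # prints 1
```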
16:50:36 -- rpc/rpc.sh@17 -- # jq length 00:03:21.465 16:50:36 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:21.465 16:50:36 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:21.465 16:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.465 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.465 [2024-04-18 16:50:36.971377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:21.465 [2024-04-18 16:50:36.971433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:21.465 [2024-04-18 16:50:36.971471] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1561d40 00:03:21.465 [2024-04-18 16:50:36.971485] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:21.465 [2024-04-18 16:50:36.973010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:21.465 [2024-04-18 16:50:36.973038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:21.465 Passthru0 00:03:21.465 16:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.465 16:50:36 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:21.465 16:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.465 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.465 16:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.465 16:50:36 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:21.465 { 00:03:21.465 "name": "Malloc0", 00:03:21.465 "aliases": [ 00:03:21.465 "538e469b-f67b-4fba-8498-3ee076fbb930" 00:03:21.465 ], 00:03:21.465 "product_name": "Malloc disk", 00:03:21.465 "block_size": 512, 00:03:21.465 "num_blocks": 16384, 00:03:21.465 "uuid": "538e469b-f67b-4fba-8498-3ee076fbb930", 00:03:21.465 "assigned_rate_limits": { 00:03:21.465 "rw_ios_per_sec": 0, 00:03:21.465 "rw_mbytes_per_sec": 0, 00:03:21.465 "r_mbytes_per_sec": 0, 00:03:21.465 "w_mbytes_per_sec": 0 00:03:21.465 
}, 00:03:21.465 "claimed": true, 00:03:21.465 "claim_type": "exclusive_write", 00:03:21.465 "zoned": false, 00:03:21.465 "supported_io_types": { 00:03:21.465 "read": true, 00:03:21.465 "write": true, 00:03:21.465 "unmap": true, 00:03:21.465 "write_zeroes": true, 00:03:21.465 "flush": true, 00:03:21.465 "reset": true, 00:03:21.465 "compare": false, 00:03:21.465 "compare_and_write": false, 00:03:21.465 "abort": true, 00:03:21.465 "nvme_admin": false, 00:03:21.465 "nvme_io": false 00:03:21.465 }, 00:03:21.465 "memory_domains": [ 00:03:21.465 { 00:03:21.465 "dma_device_id": "system", 00:03:21.465 "dma_device_type": 1 00:03:21.465 }, 00:03:21.465 { 00:03:21.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.465 "dma_device_type": 2 00:03:21.465 } 00:03:21.465 ], 00:03:21.465 "driver_specific": {} 00:03:21.465 }, 00:03:21.465 { 00:03:21.465 "name": "Passthru0", 00:03:21.465 "aliases": [ 00:03:21.465 "8bbe836e-52af-5050-bb65-9b70966e4570" 00:03:21.465 ], 00:03:21.465 "product_name": "passthru", 00:03:21.465 "block_size": 512, 00:03:21.465 "num_blocks": 16384, 00:03:21.465 "uuid": "8bbe836e-52af-5050-bb65-9b70966e4570", 00:03:21.465 "assigned_rate_limits": { 00:03:21.465 "rw_ios_per_sec": 0, 00:03:21.465 "rw_mbytes_per_sec": 0, 00:03:21.465 "r_mbytes_per_sec": 0, 00:03:21.465 "w_mbytes_per_sec": 0 00:03:21.465 }, 00:03:21.465 "claimed": false, 00:03:21.465 "zoned": false, 00:03:21.465 "supported_io_types": { 00:03:21.465 "read": true, 00:03:21.465 "write": true, 00:03:21.465 "unmap": true, 00:03:21.465 "write_zeroes": true, 00:03:21.465 "flush": true, 00:03:21.465 "reset": true, 00:03:21.465 "compare": false, 00:03:21.465 "compare_and_write": false, 00:03:21.465 "abort": true, 00:03:21.465 "nvme_admin": false, 00:03:21.465 "nvme_io": false 00:03:21.465 }, 00:03:21.465 "memory_domains": [ 00:03:21.465 { 00:03:21.465 "dma_device_id": "system", 00:03:21.465 "dma_device_type": 1 00:03:21.465 }, 00:03:21.465 { 00:03:21.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:03:21.465 "dma_device_type": 2 00:03:21.465 } 00:03:21.465 ], 00:03:21.465 "driver_specific": { 00:03:21.465 "passthru": { 00:03:21.465 "name": "Passthru0", 00:03:21.465 "base_bdev_name": "Malloc0" 00:03:21.465 } 00:03:21.465 } 00:03:21.465 } 00:03:21.465 ]' 00:03:21.465 16:50:36 -- rpc/rpc.sh@21 -- # jq length 00:03:21.465 16:50:37 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:21.465 16:50:37 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:21.465 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.465 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.465 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.465 16:50:37 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:21.465 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.465 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.465 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.465 16:50:37 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:21.465 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.465 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.465 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.465 16:50:37 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:21.465 16:50:37 -- rpc/rpc.sh@26 -- # jq length 00:03:21.465 16:50:37 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:21.465 00:03:21.465 real 0m0.233s 00:03:21.465 user 0m0.147s 00:03:21.465 sys 0m0.023s 00:03:21.465 16:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.465 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.465 ************************************ 00:03:21.465 END TEST rpc_integrity 00:03:21.465 ************************************ 00:03:21.465 16:50:37 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:21.465 16:50:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.465 16:50:37 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:03:21.465 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.723 ************************************ 00:03:21.723 START TEST rpc_plugins 00:03:21.723 ************************************ 00:03:21.723 16:50:37 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:03:21.723 16:50:37 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:21.723 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.723 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.723 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.723 16:50:37 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:21.723 16:50:37 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:21.723 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.723 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.723 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.723 16:50:37 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:21.723 { 00:03:21.723 "name": "Malloc1", 00:03:21.723 "aliases": [ 00:03:21.723 "fb49e415-8ce5-42cc-bca5-abcda994a6dc" 00:03:21.723 ], 00:03:21.723 "product_name": "Malloc disk", 00:03:21.723 "block_size": 4096, 00:03:21.723 "num_blocks": 256, 00:03:21.723 "uuid": "fb49e415-8ce5-42cc-bca5-abcda994a6dc", 00:03:21.723 "assigned_rate_limits": { 00:03:21.723 "rw_ios_per_sec": 0, 00:03:21.723 "rw_mbytes_per_sec": 0, 00:03:21.723 "r_mbytes_per_sec": 0, 00:03:21.723 "w_mbytes_per_sec": 0 00:03:21.723 }, 00:03:21.723 "claimed": false, 00:03:21.723 "zoned": false, 00:03:21.723 "supported_io_types": { 00:03:21.723 "read": true, 00:03:21.723 "write": true, 00:03:21.723 "unmap": true, 00:03:21.723 "write_zeroes": true, 00:03:21.723 "flush": true, 00:03:21.723 "reset": true, 00:03:21.723 "compare": false, 00:03:21.723 "compare_and_write": false, 00:03:21.723 "abort": true, 00:03:21.723 "nvme_admin": false, 00:03:21.723 "nvme_io": false 00:03:21.723 }, 00:03:21.723 "memory_domains": [ 00:03:21.723 { 
00:03:21.723 "dma_device_id": "system", 00:03:21.723 "dma_device_type": 1 00:03:21.723 }, 00:03:21.723 { 00:03:21.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.723 "dma_device_type": 2 00:03:21.723 } 00:03:21.723 ], 00:03:21.723 "driver_specific": {} 00:03:21.723 } 00:03:21.723 ]' 00:03:21.723 16:50:37 -- rpc/rpc.sh@32 -- # jq length 00:03:21.723 16:50:37 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:21.723 16:50:37 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:21.723 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.723 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.723 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.723 16:50:37 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:21.723 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.723 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.723 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.723 16:50:37 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:21.723 16:50:37 -- rpc/rpc.sh@36 -- # jq length 00:03:21.723 16:50:37 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:21.723 00:03:21.723 real 0m0.116s 00:03:21.723 user 0m0.073s 00:03:21.723 sys 0m0.011s 00:03:21.723 16:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.723 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.723 ************************************ 00:03:21.723 END TEST rpc_plugins 00:03:21.723 ************************************ 00:03:21.723 16:50:37 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:21.723 16:50:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.723 16:50:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.723 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.981 ************************************ 00:03:21.981 START TEST rpc_trace_cmd_test 00:03:21.981 ************************************ 00:03:21.981 
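The rpc_trace_cmd_test section that follows verifies, via `jq`, that `trace_get_info` reports a `tpoint_group_mask` and that the bdev tracepoint mask is non-zero. A minimal sketch of the same checks in plain shell; the mask values are copied from this run's output, while the helper logic is illustrative:

```shell
# Values as reported by trace_get_info in this run.
group_mask="0x8"                  # tpoint_group_mask
bdev_mask="0xffffffffffffffff"    # .bdev.tpoint_mask

# 0x8 has bit 3 set, which in this run corresponds to the bdev trace group.
bit3=$(( group_mask & (1 << 3) ))
if [ "$bdev_mask" != "0x0" ] && [ "$bit3" -ne 0 ]; then
  status="bdev tracing enabled"
fi
echo "$status"
```

This mirrors the test's `'[' 0xffffffffffffffff '!=' 0x0 ']'` comparison without depending on `jq`.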
16:50:37 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:03:21.981 16:50:37 -- rpc/rpc.sh@40 -- # local info 00:03:21.981 16:50:37 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:21.981 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:21.981 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.981 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:21.981 16:50:37 -- rpc/rpc.sh@42 -- # info='{ 00:03:21.981 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1565652", 00:03:21.981 "tpoint_group_mask": "0x8", 00:03:21.981 "iscsi_conn": { 00:03:21.981 "mask": "0x2", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "scsi": { 00:03:21.981 "mask": "0x4", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "bdev": { 00:03:21.981 "mask": "0x8", 00:03:21.981 "tpoint_mask": "0xffffffffffffffff" 00:03:21.981 }, 00:03:21.981 "nvmf_rdma": { 00:03:21.981 "mask": "0x10", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "nvmf_tcp": { 00:03:21.981 "mask": "0x20", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "ftl": { 00:03:21.981 "mask": "0x40", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "blobfs": { 00:03:21.981 "mask": "0x80", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "dsa": { 00:03:21.981 "mask": "0x200", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "thread": { 00:03:21.981 "mask": "0x400", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "nvme_pcie": { 00:03:21.981 "mask": "0x800", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "iaa": { 00:03:21.981 "mask": "0x1000", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "nvme_tcp": { 00:03:21.981 "mask": "0x2000", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "bdev_nvme": { 00:03:21.981 "mask": "0x4000", 00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 }, 00:03:21.981 "sock": { 00:03:21.981 "mask": "0x8000", 
00:03:21.981 "tpoint_mask": "0x0" 00:03:21.981 } 00:03:21.981 }' 00:03:21.981 16:50:37 -- rpc/rpc.sh@43 -- # jq length 00:03:21.981 16:50:37 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:21.981 16:50:37 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:21.981 16:50:37 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:21.981 16:50:37 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:21.981 16:50:37 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:21.981 16:50:37 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:21.981 16:50:37 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:21.981 16:50:37 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:21.981 16:50:37 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:21.981 00:03:21.981 real 0m0.195s 00:03:21.981 user 0m0.176s 00:03:21.981 sys 0m0.013s 00:03:21.981 16:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.981 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:21.981 ************************************ 00:03:21.981 END TEST rpc_trace_cmd_test 00:03:21.981 ************************************ 00:03:21.981 16:50:37 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:21.981 16:50:37 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:21.981 16:50:37 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:21.981 16:50:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.981 16:50:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.981 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.240 ************************************ 00:03:22.240 START TEST rpc_daemon_integrity 00:03:22.240 ************************************ 00:03:22.240 16:50:37 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:22.240 16:50:37 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:22.240 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:22.240 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.240 16:50:37 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:22.240 16:50:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:22.240 16:50:37 -- rpc/rpc.sh@13 -- # jq length 00:03:22.240 16:50:37 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:22.240 16:50:37 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:22.240 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:22.240 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.240 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:22.240 16:50:37 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:22.240 16:50:37 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:22.240 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:22.240 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.240 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:22.240 16:50:37 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:22.240 { 00:03:22.240 "name": "Malloc2", 00:03:22.240 "aliases": [ 00:03:22.240 "0bc7ee28-f1cc-4234-96e1-99b2ed61ba5d" 00:03:22.240 ], 00:03:22.240 "product_name": "Malloc disk", 00:03:22.240 "block_size": 512, 00:03:22.240 "num_blocks": 16384, 00:03:22.240 "uuid": "0bc7ee28-f1cc-4234-96e1-99b2ed61ba5d", 00:03:22.240 "assigned_rate_limits": { 00:03:22.240 "rw_ios_per_sec": 0, 00:03:22.240 "rw_mbytes_per_sec": 0, 00:03:22.240 "r_mbytes_per_sec": 0, 00:03:22.240 "w_mbytes_per_sec": 0 00:03:22.240 }, 00:03:22.240 "claimed": false, 00:03:22.240 "zoned": false, 00:03:22.240 "supported_io_types": { 00:03:22.240 "read": true, 00:03:22.240 "write": true, 00:03:22.240 "unmap": true, 00:03:22.240 "write_zeroes": true, 00:03:22.240 "flush": true, 00:03:22.240 "reset": true, 00:03:22.240 "compare": false, 00:03:22.240 "compare_and_write": false, 00:03:22.240 "abort": true, 00:03:22.240 "nvme_admin": false, 00:03:22.240 "nvme_io": false 00:03:22.240 }, 00:03:22.240 "memory_domains": [ 00:03:22.240 { 00:03:22.240 "dma_device_id": "system", 00:03:22.240 "dma_device_type": 1 00:03:22.240 }, 
00:03:22.240 { 00:03:22.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.240 "dma_device_type": 2 00:03:22.240 } 00:03:22.240 ], 00:03:22.240 "driver_specific": {} 00:03:22.240 } 00:03:22.240 ]' 00:03:22.240 16:50:37 -- rpc/rpc.sh@17 -- # jq length 00:03:22.240 16:50:37 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:22.240 16:50:37 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:22.240 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:22.240 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.240 [2024-04-18 16:50:37.866814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:22.240 [2024-04-18 16:50:37.866867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:22.240 [2024-04-18 16:50:37.866892] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f92f0 00:03:22.240 [2024-04-18 16:50:37.866907] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:22.240 [2024-04-18 16:50:37.868234] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:22.240 [2024-04-18 16:50:37.868263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:22.240 Passthru0 00:03:22.240 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:22.240 16:50:37 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:22.240 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:22.240 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.240 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:22.240 16:50:37 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:22.240 { 00:03:22.240 "name": "Malloc2", 00:03:22.240 "aliases": [ 00:03:22.240 "0bc7ee28-f1cc-4234-96e1-99b2ed61ba5d" 00:03:22.240 ], 00:03:22.240 "product_name": "Malloc disk", 00:03:22.240 "block_size": 512, 00:03:22.240 "num_blocks": 16384, 00:03:22.240 "uuid": 
"0bc7ee28-f1cc-4234-96e1-99b2ed61ba5d", 00:03:22.240 "assigned_rate_limits": { 00:03:22.240 "rw_ios_per_sec": 0, 00:03:22.240 "rw_mbytes_per_sec": 0, 00:03:22.240 "r_mbytes_per_sec": 0, 00:03:22.240 "w_mbytes_per_sec": 0 00:03:22.240 }, 00:03:22.240 "claimed": true, 00:03:22.240 "claim_type": "exclusive_write", 00:03:22.240 "zoned": false, 00:03:22.240 "supported_io_types": { 00:03:22.240 "read": true, 00:03:22.240 "write": true, 00:03:22.240 "unmap": true, 00:03:22.240 "write_zeroes": true, 00:03:22.240 "flush": true, 00:03:22.240 "reset": true, 00:03:22.240 "compare": false, 00:03:22.240 "compare_and_write": false, 00:03:22.240 "abort": true, 00:03:22.240 "nvme_admin": false, 00:03:22.240 "nvme_io": false 00:03:22.240 }, 00:03:22.240 "memory_domains": [ 00:03:22.240 { 00:03:22.240 "dma_device_id": "system", 00:03:22.240 "dma_device_type": 1 00:03:22.240 }, 00:03:22.240 { 00:03:22.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.240 "dma_device_type": 2 00:03:22.240 } 00:03:22.240 ], 00:03:22.240 "driver_specific": {} 00:03:22.240 }, 00:03:22.240 { 00:03:22.240 "name": "Passthru0", 00:03:22.240 "aliases": [ 00:03:22.240 "5cdb482e-6d08-5203-acf5-ce0f274f89aa" 00:03:22.240 ], 00:03:22.240 "product_name": "passthru", 00:03:22.240 "block_size": 512, 00:03:22.240 "num_blocks": 16384, 00:03:22.240 "uuid": "5cdb482e-6d08-5203-acf5-ce0f274f89aa", 00:03:22.240 "assigned_rate_limits": { 00:03:22.240 "rw_ios_per_sec": 0, 00:03:22.240 "rw_mbytes_per_sec": 0, 00:03:22.240 "r_mbytes_per_sec": 0, 00:03:22.240 "w_mbytes_per_sec": 0 00:03:22.240 }, 00:03:22.240 "claimed": false, 00:03:22.240 "zoned": false, 00:03:22.240 "supported_io_types": { 00:03:22.240 "read": true, 00:03:22.240 "write": true, 00:03:22.240 "unmap": true, 00:03:22.240 "write_zeroes": true, 00:03:22.240 "flush": true, 00:03:22.240 "reset": true, 00:03:22.240 "compare": false, 00:03:22.240 "compare_and_write": false, 00:03:22.240 "abort": true, 00:03:22.240 "nvme_admin": false, 00:03:22.240 "nvme_io": false 
00:03:22.240 }, 00:03:22.240 "memory_domains": [ 00:03:22.240 { 00:03:22.240 "dma_device_id": "system", 00:03:22.240 "dma_device_type": 1 00:03:22.240 }, 00:03:22.240 { 00:03:22.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.240 "dma_device_type": 2 00:03:22.240 } 00:03:22.240 ], 00:03:22.240 "driver_specific": { 00:03:22.240 "passthru": { 00:03:22.240 "name": "Passthru0", 00:03:22.240 "base_bdev_name": "Malloc2" 00:03:22.240 } 00:03:22.240 } 00:03:22.240 } 00:03:22.240 ]' 00:03:22.240 16:50:37 -- rpc/rpc.sh@21 -- # jq length 00:03:22.240 16:50:37 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:22.240 16:50:37 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:22.240 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:22.240 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.240 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:22.240 16:50:37 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:22.240 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:22.240 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.240 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:22.240 16:50:37 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:22.240 16:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:22.240 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.498 16:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:22.499 16:50:37 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:22.499 16:50:37 -- rpc/rpc.sh@26 -- # jq length 00:03:22.499 16:50:37 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:22.499 00:03:22.499 real 0m0.223s 00:03:22.499 user 0m0.146s 00:03:22.499 sys 0m0.022s 00:03:22.499 16:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.499 16:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:22.499 ************************************ 00:03:22.499 END TEST rpc_daemon_integrity 00:03:22.499 
************************************ 00:03:22.499 16:50:38 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:22.499 16:50:38 -- rpc/rpc.sh@84 -- # killprocess 1565652 00:03:22.499 16:50:38 -- common/autotest_common.sh@936 -- # '[' -z 1565652 ']' 00:03:22.499 16:50:38 -- common/autotest_common.sh@940 -- # kill -0 1565652 00:03:22.499 16:50:38 -- common/autotest_common.sh@941 -- # uname 00:03:22.499 16:50:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:22.499 16:50:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1565652 00:03:22.499 16:50:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:22.499 16:50:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:22.499 16:50:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1565652' 00:03:22.499 killing process with pid 1565652 00:03:22.499 16:50:38 -- common/autotest_common.sh@955 -- # kill 1565652 00:03:22.499 16:50:38 -- common/autotest_common.sh@960 -- # wait 1565652 00:03:23.064 00:03:23.064 real 0m2.242s 00:03:23.064 user 0m2.819s 00:03:23.064 sys 0m0.728s 00:03:23.064 16:50:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:23.064 16:50:38 -- common/autotest_common.sh@10 -- # set +x 00:03:23.064 ************************************ 00:03:23.064 END TEST rpc 00:03:23.064 ************************************ 00:03:23.064 16:50:38 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:23.064 16:50:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.064 16:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.064 16:50:38 -- common/autotest_common.sh@10 -- # set +x 00:03:23.064 ************************************ 00:03:23.064 START TEST skip_rpc 00:03:23.064 ************************************ 00:03:23.064 16:50:38 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:23.064 * Looking for test storage... 00:03:23.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.064 16:50:38 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:23.064 16:50:38 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:23.064 16:50:38 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:23.064 16:50:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.064 16:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.064 16:50:38 -- common/autotest_common.sh@10 -- # set +x 00:03:23.064 ************************************ 00:03:23.064 START TEST skip_rpc 00:03:23.064 ************************************ 00:03:23.064 16:50:38 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:03:23.064 16:50:38 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1566200 00:03:23.064 16:50:38 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:23.064 16:50:38 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:23.064 16:50:38 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:23.323 [2024-04-18 16:50:38.789549] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
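The failing `rpc_cmd` in the lines that follow is intentional: the target was started with `--no-rpc-server`, and the test wraps the call in a `NOT` helper so that a non-zero exit status counts as success (`es=1` in the xtrace above maps back to exit 0). A minimal sketch of that inversion; the helper body here is assumed, not copied from autotest_common.sh:

```shell
# Invert a command's exit status: succeed only if the command fails.
NOT() { ! "$@"; }

# 'false' always fails, so NOT reports success and es stays 0.
if NOT false; then
  es=0
else
  es=1
fi
echo "es=$es"
```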
00:03:23.323 [2024-04-18 16:50:38.789619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566200 ] 00:03:23.323 EAL: No free 2048 kB hugepages reported on node 1 00:03:23.323 [2024-04-18 16:50:38.849355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.323 [2024-04-18 16:50:38.967193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.630 16:50:43 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:28.630 16:50:43 -- common/autotest_common.sh@638 -- # local es=0 00:03:28.630 16:50:43 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:28.630 16:50:43 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:03:28.630 16:50:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:28.630 16:50:43 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:03:28.630 16:50:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:28.630 16:50:43 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:03:28.630 16:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.630 16:50:43 -- common/autotest_common.sh@10 -- # set +x 00:03:28.630 16:50:43 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:28.630 16:50:43 -- common/autotest_common.sh@641 -- # es=1 00:03:28.631 16:50:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:28.631 16:50:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:28.631 16:50:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:28.631 16:50:43 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:28.631 16:50:43 -- rpc/skip_rpc.sh@23 -- # killprocess 1566200 00:03:28.631 16:50:43 -- common/autotest_common.sh@936 -- # '[' -z 1566200 ']' 00:03:28.631 16:50:43 -- common/autotest_common.sh@940 -- # 
kill -0 1566200 00:03:28.631 16:50:43 -- common/autotest_common.sh@941 -- # uname 00:03:28.631 16:50:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:28.631 16:50:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1566200 00:03:28.631 16:50:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:28.631 16:50:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:28.631 16:50:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1566200' 00:03:28.631 killing process with pid 1566200 00:03:28.631 16:50:43 -- common/autotest_common.sh@955 -- # kill 1566200 00:03:28.631 16:50:43 -- common/autotest_common.sh@960 -- # wait 1566200 00:03:28.631 00:03:28.631 real 0m5.484s 00:03:28.631 user 0m5.168s 00:03:28.631 sys 0m0.315s 00:03:28.631 16:50:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.631 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:03:28.631 ************************************ 00:03:28.631 END TEST skip_rpc 00:03:28.631 ************************************ 00:03:28.631 16:50:44 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:28.631 16:50:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.631 16:50:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.631 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:03:28.889 ************************************ 00:03:28.889 START TEST skip_rpc_with_json 00:03:28.889 ************************************ 00:03:28.889 16:50:44 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:03:28.889 16:50:44 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:28.889 16:50:44 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1566897 00:03:28.889 16:50:44 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:28.889 16:50:44 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 
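`waitforlisten`, used in the skip_rpc_with_json section below, blocks until the target's RPC Unix domain socket (`/var/tmp/spdk.sock` in this run) is ready. A rough sketch of the polling idea; the socket path and retry count are illustrative, and the real helper also checks that the target process is still alive:

```shell
# Poll for a Unix socket to appear; give up after a few tries.
sock="/tmp/does_not_exist_spdk.sock"   # hypothetical path for illustration
listening=no
for _ in 1 2 3; do
  if [ -S "$sock" ]; then
    listening=yes
    break
  fi
  sleep 0.1
done
echo "listening=$listening"
```

With the hypothetical path above the socket never appears, so the sketch reports `listening=no`.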
00:03:28.889 16:50:44 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1566897 00:03:28.889 16:50:44 -- common/autotest_common.sh@817 -- # '[' -z 1566897 ']' 00:03:28.889 16:50:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.889 16:50:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:28.889 16:50:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.889 16:50:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:28.889 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:03:28.889 [2024-04-18 16:50:44.393305] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:03:28.889 [2024-04-18 16:50:44.393427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566897 ] 00:03:28.889 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.889 [2024-04-18 16:50:44.450628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.889 [2024-04-18 16:50:44.559014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.149 16:50:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:29.149 16:50:44 -- common/autotest_common.sh@850 -- # return 0 00:03:29.149 16:50:44 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:29.149 16:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.149 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:03:29.149 [2024-04-18 16:50:44.821059] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:29.149 request: 00:03:29.149 { 00:03:29.149 "trtype": "tcp", 00:03:29.149 "method": "nvmf_get_transports", 
00:03:29.149 "req_id": 1 00:03:29.149 } 00:03:29.149 Got JSON-RPC error response 00:03:29.149 response: 00:03:29.149 { 00:03:29.149 "code": -19, 00:03:29.149 "message": "No such device" 00:03:29.149 } 00:03:29.149 16:50:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:29.149 16:50:44 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:29.149 16:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.149 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:03:29.149 [2024-04-18 16:50:44.829174] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:29.149 16:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.149 16:50:44 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:29.149 16:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.149 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:03:29.408 16:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.408 16:50:44 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.408 { 00:03:29.408 "subsystems": [ 00:03:29.408 { 00:03:29.408 "subsystem": "vfio_user_target", 00:03:29.408 "config": null 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "subsystem": "keyring", 00:03:29.408 "config": [] 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "subsystem": "iobuf", 00:03:29.408 "config": [ 00:03:29.408 { 00:03:29.408 "method": "iobuf_set_options", 00:03:29.408 "params": { 00:03:29.408 "small_pool_count": 8192, 00:03:29.408 "large_pool_count": 1024, 00:03:29.408 "small_bufsize": 8192, 00:03:29.408 "large_bufsize": 135168 00:03:29.408 } 00:03:29.408 } 00:03:29.408 ] 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "subsystem": "sock", 00:03:29.408 "config": [ 00:03:29.408 { 00:03:29.408 "method": "sock_impl_set_options", 00:03:29.408 "params": { 00:03:29.408 "impl_name": "posix", 00:03:29.408 "recv_buf_size": 2097152, 00:03:29.408 "send_buf_size": 2097152, 00:03:29.408 
"enable_recv_pipe": true, 00:03:29.408 "enable_quickack": false, 00:03:29.408 "enable_placement_id": 0, 00:03:29.408 "enable_zerocopy_send_server": true, 00:03:29.408 "enable_zerocopy_send_client": false, 00:03:29.408 "zerocopy_threshold": 0, 00:03:29.408 "tls_version": 0, 00:03:29.408 "enable_ktls": false 00:03:29.408 } 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "method": "sock_impl_set_options", 00:03:29.408 "params": { 00:03:29.408 "impl_name": "ssl", 00:03:29.408 "recv_buf_size": 4096, 00:03:29.408 "send_buf_size": 4096, 00:03:29.408 "enable_recv_pipe": true, 00:03:29.408 "enable_quickack": false, 00:03:29.408 "enable_placement_id": 0, 00:03:29.408 "enable_zerocopy_send_server": true, 00:03:29.408 "enable_zerocopy_send_client": false, 00:03:29.408 "zerocopy_threshold": 0, 00:03:29.408 "tls_version": 0, 00:03:29.408 "enable_ktls": false 00:03:29.408 } 00:03:29.408 } 00:03:29.408 ] 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "subsystem": "vmd", 00:03:29.408 "config": [] 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "subsystem": "accel", 00:03:29.408 "config": [ 00:03:29.408 { 00:03:29.408 "method": "accel_set_options", 00:03:29.408 "params": { 00:03:29.408 "small_cache_size": 128, 00:03:29.408 "large_cache_size": 16, 00:03:29.408 "task_count": 2048, 00:03:29.408 "sequence_count": 2048, 00:03:29.408 "buf_count": 2048 00:03:29.408 } 00:03:29.408 } 00:03:29.408 ] 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "subsystem": "bdev", 00:03:29.408 "config": [ 00:03:29.408 { 00:03:29.408 "method": "bdev_set_options", 00:03:29.408 "params": { 00:03:29.408 "bdev_io_pool_size": 65535, 00:03:29.408 "bdev_io_cache_size": 256, 00:03:29.408 "bdev_auto_examine": true, 00:03:29.408 "iobuf_small_cache_size": 128, 00:03:29.408 "iobuf_large_cache_size": 16 00:03:29.408 } 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "method": "bdev_raid_set_options", 00:03:29.408 "params": { 00:03:29.408 "process_window_size_kb": 1024 00:03:29.408 } 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "method": 
"bdev_iscsi_set_options", 00:03:29.408 "params": { 00:03:29.408 "timeout_sec": 30 00:03:29.408 } 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "method": "bdev_nvme_set_options", 00:03:29.408 "params": { 00:03:29.408 "action_on_timeout": "none", 00:03:29.408 "timeout_us": 0, 00:03:29.408 "timeout_admin_us": 0, 00:03:29.408 "keep_alive_timeout_ms": 10000, 00:03:29.408 "arbitration_burst": 0, 00:03:29.408 "low_priority_weight": 0, 00:03:29.408 "medium_priority_weight": 0, 00:03:29.408 "high_priority_weight": 0, 00:03:29.408 "nvme_adminq_poll_period_us": 10000, 00:03:29.408 "nvme_ioq_poll_period_us": 0, 00:03:29.408 "io_queue_requests": 0, 00:03:29.408 "delay_cmd_submit": true, 00:03:29.408 "transport_retry_count": 4, 00:03:29.408 "bdev_retry_count": 3, 00:03:29.408 "transport_ack_timeout": 0, 00:03:29.408 "ctrlr_loss_timeout_sec": 0, 00:03:29.408 "reconnect_delay_sec": 0, 00:03:29.408 "fast_io_fail_timeout_sec": 0, 00:03:29.408 "disable_auto_failback": false, 00:03:29.408 "generate_uuids": false, 00:03:29.408 "transport_tos": 0, 00:03:29.408 "nvme_error_stat": false, 00:03:29.408 "rdma_srq_size": 0, 00:03:29.408 "io_path_stat": false, 00:03:29.408 "allow_accel_sequence": false, 00:03:29.408 "rdma_max_cq_size": 0, 00:03:29.408 "rdma_cm_event_timeout_ms": 0, 00:03:29.408 "dhchap_digests": [ 00:03:29.408 "sha256", 00:03:29.408 "sha384", 00:03:29.408 "sha512" 00:03:29.408 ], 00:03:29.408 "dhchap_dhgroups": [ 00:03:29.408 "null", 00:03:29.408 "ffdhe2048", 00:03:29.408 "ffdhe3072", 00:03:29.408 "ffdhe4096", 00:03:29.408 "ffdhe6144", 00:03:29.408 "ffdhe8192" 00:03:29.408 ] 00:03:29.408 } 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "method": "bdev_nvme_set_hotplug", 00:03:29.408 "params": { 00:03:29.408 "period_us": 100000, 00:03:29.408 "enable": false 00:03:29.408 } 00:03:29.408 }, 00:03:29.408 { 00:03:29.408 "method": "bdev_wait_for_examine" 00:03:29.408 } 00:03:29.409 ] 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "subsystem": "scsi", 00:03:29.409 "config": null 00:03:29.409 
}, 00:03:29.409 { 00:03:29.409 "subsystem": "scheduler", 00:03:29.409 "config": [ 00:03:29.409 { 00:03:29.409 "method": "framework_set_scheduler", 00:03:29.409 "params": { 00:03:29.409 "name": "static" 00:03:29.409 } 00:03:29.409 } 00:03:29.409 ] 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "subsystem": "vhost_scsi", 00:03:29.409 "config": [] 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "subsystem": "vhost_blk", 00:03:29.409 "config": [] 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "subsystem": "ublk", 00:03:29.409 "config": [] 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "subsystem": "nbd", 00:03:29.409 "config": [] 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "subsystem": "nvmf", 00:03:29.409 "config": [ 00:03:29.409 { 00:03:29.409 "method": "nvmf_set_config", 00:03:29.409 "params": { 00:03:29.409 "discovery_filter": "match_any", 00:03:29.409 "admin_cmd_passthru": { 00:03:29.409 "identify_ctrlr": false 00:03:29.409 } 00:03:29.409 } 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "method": "nvmf_set_max_subsystems", 00:03:29.409 "params": { 00:03:29.409 "max_subsystems": 1024 00:03:29.409 } 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "method": "nvmf_set_crdt", 00:03:29.409 "params": { 00:03:29.409 "crdt1": 0, 00:03:29.409 "crdt2": 0, 00:03:29.409 "crdt3": 0 00:03:29.409 } 00:03:29.409 }, 00:03:29.409 { 00:03:29.409 "method": "nvmf_create_transport", 00:03:29.409 "params": { 00:03:29.409 "trtype": "TCP", 00:03:29.409 "max_queue_depth": 128, 00:03:29.409 "max_io_qpairs_per_ctrlr": 127, 00:03:29.409 "in_capsule_data_size": 4096, 00:03:29.409 "max_io_size": 131072, 00:03:29.409 "io_unit_size": 131072, 00:03:29.409 "max_aq_depth": 128, 00:03:29.409 "num_shared_buffers": 511, 00:03:29.409 "buf_cache_size": 4294967295, 00:03:29.409 "dif_insert_or_strip": false, 00:03:29.409 "zcopy": false, 00:03:29.409 "c2h_success": true, 00:03:29.409 "sock_priority": 0, 00:03:29.409 "abort_timeout_sec": 1, 00:03:29.409 "ack_timeout": 0 00:03:29.409 } 00:03:29.409 } 00:03:29.409 ] 00:03:29.409 
}, 00:03:29.409 { 00:03:29.409 "subsystem": "iscsi", 00:03:29.409 "config": [ 00:03:29.409 { 00:03:29.409 "method": "iscsi_set_options", 00:03:29.409 "params": { 00:03:29.409 "node_base": "iqn.2016-06.io.spdk", 00:03:29.409 "max_sessions": 128, 00:03:29.409 "max_connections_per_session": 2, 00:03:29.409 "max_queue_depth": 64, 00:03:29.409 "default_time2wait": 2, 00:03:29.409 "default_time2retain": 20, 00:03:29.409 "first_burst_length": 8192, 00:03:29.409 "immediate_data": true, 00:03:29.409 "allow_duplicated_isid": false, 00:03:29.409 "error_recovery_level": 0, 00:03:29.409 "nop_timeout": 60, 00:03:29.409 "nop_in_interval": 30, 00:03:29.409 "disable_chap": false, 00:03:29.409 "require_chap": false, 00:03:29.409 "mutual_chap": false, 00:03:29.409 "chap_group": 0, 00:03:29.409 "max_large_datain_per_connection": 64, 00:03:29.409 "max_r2t_per_connection": 4, 00:03:29.409 "pdu_pool_size": 36864, 00:03:29.409 "immediate_data_pool_size": 16384, 00:03:29.409 "data_out_pool_size": 2048 00:03:29.409 } 00:03:29.409 } 00:03:29.409 ] 00:03:29.409 } 00:03:29.409 ] 00:03:29.409 } 00:03:29.409 16:50:44 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:29.409 16:50:44 -- rpc/skip_rpc.sh@40 -- # killprocess 1566897 00:03:29.409 16:50:44 -- common/autotest_common.sh@936 -- # '[' -z 1566897 ']' 00:03:29.409 16:50:44 -- common/autotest_common.sh@940 -- # kill -0 1566897 00:03:29.409 16:50:44 -- common/autotest_common.sh@941 -- # uname 00:03:29.409 16:50:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:29.409 16:50:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1566897 00:03:29.409 16:50:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:29.409 16:50:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:29.409 16:50:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1566897' 00:03:29.409 killing process with pid 1566897 00:03:29.409 16:50:45 -- common/autotest_common.sh@955 -- # 
kill 1566897 00:03:29.409 16:50:45 -- common/autotest_common.sh@960 -- # wait 1566897 00:03:29.976 16:50:45 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1567039 00:03:29.976 16:50:45 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.976 16:50:45 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:35.243 16:50:50 -- rpc/skip_rpc.sh@50 -- # killprocess 1567039 00:03:35.243 16:50:50 -- common/autotest_common.sh@936 -- # '[' -z 1567039 ']' 00:03:35.243 16:50:50 -- common/autotest_common.sh@940 -- # kill -0 1567039 00:03:35.243 16:50:50 -- common/autotest_common.sh@941 -- # uname 00:03:35.243 16:50:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:35.243 16:50:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1567039 00:03:35.243 16:50:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:35.243 16:50:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:35.243 16:50:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1567039' 00:03:35.243 killing process with pid 1567039 00:03:35.243 16:50:50 -- common/autotest_common.sh@955 -- # kill 1567039 00:03:35.243 16:50:50 -- common/autotest_common.sh@960 -- # wait 1567039 00:03:35.501 16:50:50 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.501 16:50:50 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.501 00:03:35.501 real 0m6.626s 00:03:35.501 user 0m6.207s 00:03:35.501 sys 0m0.693s 00:03:35.501 16:50:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.501 16:50:50 -- common/autotest_common.sh@10 -- # set +x 00:03:35.501 ************************************ 00:03:35.501 END TEST skip_rpc_with_json 00:03:35.501 ************************************ 
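The RPC exchange logged above (a failing `nvmf_get_transports` followed by `nvmf_create_transport` and `save_config`) uses SPDK's JSON-RPC 2.0 framing over a Unix domain socket. A minimal sketch of building such a request and interpreting the logged error response follows; the method name and error values are taken from the log, while the helper itself is a hypothetical illustration, not SPDK's `rpc.py`:

```python
import json

def build_rpc_request(method, params=None, req_id=1):
    """Build an SPDK-style JSON-RPC 2.0 request payload.

    SPDK targets listen on a Unix domain socket (e.g. /var/tmp/spdk.sock);
    this helper only constructs the payload, it does not send it.
    """
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# The first call in the log fails because no TCP transport exists yet:
req = build_rpc_request("nvmf_get_transports", {"trtype": "tcp"})

# An error response shaped like the one logged (code -19, "No such device"):
resp = json.loads('{"jsonrpc": "2.0", "id": 1, '
                  '"error": {"code": -19, "message": "No such device"}}')
if "error" in resp:
    print("RPC failed:", resp["error"]["code"], resp["error"]["message"])
```

The test then creates the transport and calls `save_config`, so a second `nvmf_get_transports` with the same payload would succeed.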
00:03:35.501 16:50:50 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:35.501 16:50:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.501 16:50:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.501 16:50:50 -- common/autotest_common.sh@10 -- # set +x 00:03:35.501 ************************************ 00:03:35.501 START TEST skip_rpc_with_delay 00:03:35.501 ************************************ 00:03:35.501 16:50:51 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:03:35.501 16:50:51 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.502 16:50:51 -- common/autotest_common.sh@638 -- # local es=0 00:03:35.502 16:50:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.502 16:50:51 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.502 16:50:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:35.502 16:50:51 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.502 16:50:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:35.502 16:50:51 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.502 16:50:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:35.502 16:50:51 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.502 16:50:51 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:35.502 16:50:51 -- common/autotest_common.sh@641 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.502 [2024-04-18 16:50:51.139916] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:35.502 [2024-04-18 16:50:51.140025] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:35.502 16:50:51 -- common/autotest_common.sh@641 -- # es=1 00:03:35.502 16:50:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:35.502 16:50:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:35.502 16:50:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:35.502 00:03:35.502 real 0m0.065s 00:03:35.502 user 0m0.040s 00:03:35.502 sys 0m0.024s 00:03:35.502 16:50:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.502 16:50:51 -- common/autotest_common.sh@10 -- # set +x 00:03:35.502 ************************************ 00:03:35.502 END TEST skip_rpc_with_delay 00:03:35.502 ************************************ 00:03:35.502 16:50:51 -- rpc/skip_rpc.sh@77 -- # uname 00:03:35.502 16:50:51 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:35.502 16:50:51 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:35.502 16:50:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.502 16:50:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.502 16:50:51 -- common/autotest_common.sh@10 -- # set +x 00:03:35.760 ************************************ 00:03:35.760 START TEST exit_on_failed_rpc_init 00:03:35.760 ************************************ 00:03:35.760 16:50:51 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:03:35.760 16:50:51 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1567771 00:03:35.760 16:50:51 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:35.760 16:50:51 -- rpc/skip_rpc.sh@63 -- # 
waitforlisten 1567771 00:03:35.760 16:50:51 -- common/autotest_common.sh@817 -- # '[' -z 1567771 ']' 00:03:35.760 16:50:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.760 16:50:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:35.760 16:50:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.760 16:50:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:35.760 16:50:51 -- common/autotest_common.sh@10 -- # set +x 00:03:35.760 [2024-04-18 16:50:51.325264] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:03:35.760 [2024-04-18 16:50:51.325353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567771 ] 00:03:35.760 EAL: No free 2048 kB hugepages reported on node 1 00:03:35.760 [2024-04-18 16:50:51.386835] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.019 [2024-04-18 16:50:51.499877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.586 16:50:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:36.586 16:50:52 -- common/autotest_common.sh@850 -- # return 0 00:03:36.586 16:50:52 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.586 16:50:52 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:36.586 16:50:52 -- common/autotest_common.sh@638 -- # local es=0 00:03:36.586 16:50:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:36.586 16:50:52 -- common/autotest_common.sh@626 -- # 
local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.586 16:50:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:36.586 16:50:52 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.586 16:50:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:36.586 16:50:52 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.586 16:50:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:36.586 16:50:52 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.586 16:50:52 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:36.586 16:50:52 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:36.845 [2024-04-18 16:50:52.304059] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:03:36.845 [2024-04-18 16:50:52.304131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567909 ] 00:03:36.845 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.845 [2024-04-18 16:50:52.364732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.845 [2024-04-18 16:50:52.483014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:36.845 [2024-04-18 16:50:52.483155] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:36.845 [2024-04-18 16:50:52.483177] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:36.845 [2024-04-18 16:50:52.483191] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:37.103 16:50:52 -- common/autotest_common.sh@641 -- # es=234 00:03:37.103 16:50:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:37.103 16:50:52 -- common/autotest_common.sh@650 -- # es=106 00:03:37.103 16:50:52 -- common/autotest_common.sh@651 -- # case "$es" in 00:03:37.103 16:50:52 -- common/autotest_common.sh@658 -- # es=1 00:03:37.103 16:50:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:37.103 16:50:52 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:37.103 16:50:52 -- rpc/skip_rpc.sh@70 -- # killprocess 1567771 00:03:37.103 16:50:52 -- common/autotest_common.sh@936 -- # '[' -z 1567771 ']' 00:03:37.103 16:50:52 -- common/autotest_common.sh@940 -- # kill -0 1567771 00:03:37.103 16:50:52 -- common/autotest_common.sh@941 -- # uname 00:03:37.103 16:50:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:37.104 16:50:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1567771 00:03:37.104 16:50:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:37.104 16:50:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:37.104 16:50:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1567771' 00:03:37.104 killing process with pid 1567771 00:03:37.104 16:50:52 -- common/autotest_common.sh@955 -- # kill 1567771 00:03:37.104 16:50:52 -- common/autotest_common.sh@960 -- # wait 1567771 00:03:37.670 00:03:37.670 real 0m1.819s 00:03:37.670 user 0m2.173s 00:03:37.670 sys 0m0.485s 00:03:37.670 16:50:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:37.670 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:03:37.670 ************************************ 00:03:37.670 END TEST exit_on_failed_rpc_init 
00:03:37.670 ************************************ 00:03:37.670 16:50:53 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.670 00:03:37.670 real 0m14.521s 00:03:37.670 user 0m13.803s 00:03:37.670 sys 0m1.797s 00:03:37.670 16:50:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:37.670 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:03:37.670 ************************************ 00:03:37.670 END TEST skip_rpc 00:03:37.670 ************************************ 00:03:37.670 16:50:53 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:37.670 16:50:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.670 16:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.670 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:03:37.670 ************************************ 00:03:37.670 START TEST rpc_client 00:03:37.670 ************************************ 00:03:37.670 16:50:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:37.670 * Looking for test storage... 
00:03:37.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:37.670 16:50:53 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:37.670 OK 00:03:37.670 16:50:53 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:37.670 00:03:37.670 real 0m0.072s 00:03:37.670 user 0m0.035s 00:03:37.670 sys 0m0.042s 00:03:37.670 16:50:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:37.670 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:03:37.670 ************************************ 00:03:37.670 END TEST rpc_client 00:03:37.670 ************************************ 00:03:37.670 16:50:53 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:37.670 16:50:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.670 16:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.670 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:03:37.930 ************************************ 00:03:37.930 START TEST json_config 00:03:37.930 ************************************ 00:03:37.930 16:50:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:37.930 16:50:53 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:37.930 16:50:53 -- nvmf/common.sh@7 -- # uname -s 00:03:37.930 16:50:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.930 16:50:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.930 16:50:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.930 16:50:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.930 16:50:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.930 16:50:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.930 16:50:53 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.930 16:50:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.930 16:50:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.930 16:50:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.930 16:50:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:37.930 16:50:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:37.930 16:50:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.930 16:50:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.930 16:50:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:37.930 16:50:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.930 16:50:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:37.930 16:50:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.930 16:50:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.930 16:50:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.930 16:50:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.930 16:50:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.930 16:50:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.930 16:50:53 -- paths/export.sh@5 -- # export PATH 00:03:37.930 16:50:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.930 16:50:53 -- nvmf/common.sh@47 -- # : 0 00:03:37.930 16:50:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:37.930 16:50:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:37.930 16:50:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.930 16:50:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.930 16:50:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.930 16:50:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:37.930 16:50:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:37.930 16:50:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:37.930 
16:50:53 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:37.930 16:50:53 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:37.930 16:50:53 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:37.930 16:50:53 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:37.930 16:50:53 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:37.930 16:50:53 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:37.930 16:50:53 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:37.930 16:50:53 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:37.930 16:50:53 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:37.930 16:50:53 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:37.930 16:50:53 -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:37.930 16:50:53 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:37.930 16:50:53 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:37.930 16:50:53 -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:37.930 16:50:53 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:37.930 16:50:53 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:37.930 INFO: JSON configuration test init 00:03:37.930 16:50:53 -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:37.930 16:50:53 -- json_config/json_config.sh@262 -- # 
timing_enter json_config_test_init 00:03:37.930 16:50:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:37.930 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:03:37.930 16:50:53 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:37.930 16:50:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:37.930 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:03:37.930 16:50:53 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:37.930 16:50:53 -- json_config/common.sh@9 -- # local app=target 00:03:37.930 16:50:53 -- json_config/common.sh@10 -- # shift 00:03:37.930 16:50:53 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:37.930 16:50:53 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:37.930 16:50:53 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:37.930 16:50:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:37.931 16:50:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:37.931 16:50:53 -- json_config/common.sh@22 -- # app_pid["$app"]=1568168 00:03:37.931 16:50:53 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:37.931 16:50:53 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:37.931 Waiting for target to run... 00:03:37.931 16:50:53 -- json_config/common.sh@25 -- # waitforlisten 1568168 /var/tmp/spdk_tgt.sock 00:03:37.931 16:50:53 -- common/autotest_common.sh@817 -- # '[' -z 1568168 ']' 00:03:37.931 16:50:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:37.931 16:50:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:37.931 16:50:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:03:37.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:37.931 16:50:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:37.931 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:03:37.931 [2024-04-18 16:50:53.539282] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:03:37.931 [2024-04-18 16:50:53.539388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568168 ] 00:03:37.931 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.189 [2024-04-18 16:50:53.872557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.447 [2024-04-18 16:50:53.959214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.014 16:50:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:39.014 16:50:54 -- common/autotest_common.sh@850 -- # return 0 00:03:39.014 16:50:54 -- json_config/common.sh@26 -- # echo '' 00:03:39.014 00:03:39.014 16:50:54 -- json_config/json_config.sh@269 -- # create_accel_config 00:03:39.014 16:50:54 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:39.014 16:50:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:39.014 16:50:54 -- common/autotest_common.sh@10 -- # set +x 00:03:39.014 16:50:54 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:39.014 16:50:54 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:39.014 16:50:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:39.014 16:50:54 -- common/autotest_common.sh@10 -- # set +x 00:03:39.014 16:50:54 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:39.014 16:50:54 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 
00:03:39.014 16:50:54 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:42.299 16:50:57 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:42.299 16:50:57 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:42.299 16:50:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:42.299 16:50:57 -- common/autotest_common.sh@10 -- # set +x 00:03:42.299 16:50:57 -- json_config/json_config.sh@45 -- # local ret=0 00:03:42.299 16:50:57 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:42.299 16:50:57 -- json_config/json_config.sh@46 -- # local enabled_types 00:03:42.299 16:50:57 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:42.299 16:50:57 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:42.299 16:50:57 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:42.299 16:50:57 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:42.299 16:50:57 -- json_config/json_config.sh@48 -- # local get_types 00:03:42.299 16:50:57 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:42.299 16:50:57 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:42.299 16:50:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:42.299 16:50:57 -- common/autotest_common.sh@10 -- # set +x 00:03:42.299 16:50:57 -- json_config/json_config.sh@55 -- # return 0 00:03:42.299 16:50:57 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:42.299 16:50:57 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:42.299 16:50:57 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:42.299 16:50:57 -- 
json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:42.299 16:50:57 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:42.299 16:50:57 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:42.299 16:50:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:42.299 16:50:57 -- common/autotest_common.sh@10 -- # set +x 00:03:42.299 16:50:57 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:42.299 16:50:57 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:42.299 16:50:57 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:42.299 16:50:57 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:42.299 16:50:57 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:42.556 MallocForNvmf0 00:03:42.556 16:50:58 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:42.556 16:50:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:42.814 MallocForNvmf1 00:03:42.814 16:50:58 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:42.814 16:50:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:43.093 [2024-04-18 16:50:58.605945] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:43.093 16:50:58 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:43.093 16:50:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:43.355 16:50:58 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:43.355 16:50:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:43.612 16:50:59 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:43.612 16:50:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:43.870 16:50:59 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:43.870 16:50:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:43.870 [2024-04-18 16:50:59.553043] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:43.870 16:50:59 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:43.870 16:50:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:43.870 16:50:59 -- common/autotest_common.sh@10 -- # set +x 00:03:44.127 16:50:59 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:44.127 16:50:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:44.127 16:50:59 -- common/autotest_common.sh@10 -- # set +x 00:03:44.127 16:50:59 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:44.127 16:50:59 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 
00:03:44.127 16:50:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:44.384 MallocBdevForConfigChangeCheck 00:03:44.384 16:50:59 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:44.384 16:50:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:44.384 16:50:59 -- common/autotest_common.sh@10 -- # set +x 00:03:44.384 16:50:59 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:44.384 16:50:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:44.642 16:51:00 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:44.642 INFO: shutting down applications... 00:03:44.642 16:51:00 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:44.642 16:51:00 -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:44.642 16:51:00 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:44.642 16:51:00 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:46.540 Calling clear_iscsi_subsystem 00:03:46.540 Calling clear_nvmf_subsystem 00:03:46.540 Calling clear_nbd_subsystem 00:03:46.540 Calling clear_ublk_subsystem 00:03:46.540 Calling clear_vhost_blk_subsystem 00:03:46.540 Calling clear_vhost_scsi_subsystem 00:03:46.540 Calling clear_bdev_subsystem 00:03:46.540 16:51:01 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:46.540 16:51:01 -- json_config/json_config.sh@343 -- # count=100 00:03:46.540 16:51:01 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:46.540 16:51:01 -- json_config/json_config.sh@345 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:46.540 16:51:01 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:46.540 16:51:01 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:46.797 16:51:02 -- json_config/json_config.sh@345 -- # break 00:03:46.797 16:51:02 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:46.797 16:51:02 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:46.797 16:51:02 -- json_config/common.sh@31 -- # local app=target 00:03:46.797 16:51:02 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:46.797 16:51:02 -- json_config/common.sh@35 -- # [[ -n 1568168 ]] 00:03:46.797 16:51:02 -- json_config/common.sh@38 -- # kill -SIGINT 1568168 00:03:46.797 16:51:02 -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:46.797 16:51:02 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:46.797 16:51:02 -- json_config/common.sh@41 -- # kill -0 1568168 00:03:46.797 16:51:02 -- json_config/common.sh@45 -- # sleep 0.5 00:03:47.364 16:51:02 -- json_config/common.sh@40 -- # (( i++ )) 00:03:47.364 16:51:02 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:47.364 16:51:02 -- json_config/common.sh@41 -- # kill -0 1568168 00:03:47.364 16:51:02 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:47.364 16:51:02 -- json_config/common.sh@43 -- # break 00:03:47.364 16:51:02 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:47.364 16:51:02 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:47.364 SPDK target shutdown done 00:03:47.364 16:51:02 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:47.364 INFO: relaunching applications... 
00:03:47.364 16:51:02 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.364 16:51:02 -- json_config/common.sh@9 -- # local app=target 00:03:47.364 16:51:02 -- json_config/common.sh@10 -- # shift 00:03:47.364 16:51:02 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:47.364 16:51:02 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:47.364 16:51:02 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:47.364 16:51:02 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.364 16:51:02 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.364 16:51:02 -- json_config/common.sh@22 -- # app_pid["$app"]=1569475 00:03:47.364 16:51:02 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.364 16:51:02 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:47.364 Waiting for target to run... 00:03:47.364 16:51:02 -- json_config/common.sh@25 -- # waitforlisten 1569475 /var/tmp/spdk_tgt.sock 00:03:47.364 16:51:02 -- common/autotest_common.sh@817 -- # '[' -z 1569475 ']' 00:03:47.364 16:51:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:47.364 16:51:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:47.364 16:51:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:47.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:47.364 16:51:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:47.364 16:51:02 -- common/autotest_common.sh@10 -- # set +x 00:03:47.364 [2024-04-18 16:51:02.838949] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:03:47.364 [2024-04-18 16:51:02.839046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569475 ] 00:03:47.364 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.930 [2024-04-18 16:51:03.357545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.930 [2024-04-18 16:51:03.455049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.211 [2024-04-18 16:51:06.487480] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:51.211 [2024-04-18 16:51:06.519960] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:51.776 16:51:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:51.776 16:51:07 -- common/autotest_common.sh@850 -- # return 0 00:03:51.776 16:51:07 -- json_config/common.sh@26 -- # echo '' 00:03:51.776 00:03:51.776 16:51:07 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:51.776 16:51:07 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:51.776 INFO: Checking if target configuration is the same... 00:03:51.776 16:51:07 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:51.776 16:51:07 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:51.776 16:51:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:51.776 + '[' 2 -ne 2 ']' 00:03:51.776 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:51.776 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:51.776 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.776 +++ basename /dev/fd/62 00:03:51.776 ++ mktemp /tmp/62.XXX 00:03:51.776 + tmp_file_1=/tmp/62.gXc 00:03:51.776 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:51.776 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:51.776 + tmp_file_2=/tmp/spdk_tgt_config.json.uU3 00:03:51.776 + ret=0 00:03:51.776 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:52.034 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:52.034 + diff -u /tmp/62.gXc /tmp/spdk_tgt_config.json.uU3 00:03:52.034 + echo 'INFO: JSON config files are the same' 00:03:52.034 INFO: JSON config files are the same 00:03:52.034 + rm /tmp/62.gXc /tmp/spdk_tgt_config.json.uU3 00:03:52.034 + exit 0 00:03:52.034 16:51:07 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:52.034 16:51:07 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:52.034 INFO: changing configuration and checking if this can be detected... 
00:03:52.034 16:51:07 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:52.034 16:51:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:52.292 16:51:07 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.292 16:51:07 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:52.292 16:51:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:52.292 + '[' 2 -ne 2 ']' 00:03:52.292 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:52.292 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:52.292 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.292 +++ basename /dev/fd/62 00:03:52.292 ++ mktemp /tmp/62.XXX 00:03:52.292 + tmp_file_1=/tmp/62.8X3 00:03:52.292 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.292 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:52.292 + tmp_file_2=/tmp/spdk_tgt_config.json.4io 00:03:52.292 + ret=0 00:03:52.292 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:52.859 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:52.859 + diff -u /tmp/62.8X3 /tmp/spdk_tgt_config.json.4io 00:03:52.859 + ret=1 00:03:52.859 + echo '=== Start of file: /tmp/62.8X3 ===' 00:03:52.859 + cat /tmp/62.8X3 00:03:52.859 + echo '=== End of file: /tmp/62.8X3 ===' 00:03:52.859 + echo '' 00:03:52.859 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4io ===' 00:03:52.859 + cat /tmp/spdk_tgt_config.json.4io 00:03:52.859 + echo '=== End of file: /tmp/spdk_tgt_config.json.4io ===' 00:03:52.859 + echo '' 00:03:52.859 + rm /tmp/62.8X3 /tmp/spdk_tgt_config.json.4io 00:03:52.859 + exit 1 00:03:52.859 16:51:08 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:52.859 INFO: configuration change detected. 
00:03:52.859 16:51:08 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:52.859 16:51:08 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:52.859 16:51:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:52.859 16:51:08 -- common/autotest_common.sh@10 -- # set +x 00:03:52.859 16:51:08 -- json_config/json_config.sh@307 -- # local ret=0 00:03:52.859 16:51:08 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:52.859 16:51:08 -- json_config/json_config.sh@317 -- # [[ -n 1569475 ]] 00:03:52.859 16:51:08 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:52.859 16:51:08 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:52.859 16:51:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:52.859 16:51:08 -- common/autotest_common.sh@10 -- # set +x 00:03:52.859 16:51:08 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:52.859 16:51:08 -- json_config/json_config.sh@193 -- # uname -s 00:03:52.859 16:51:08 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:52.859 16:51:08 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:52.859 16:51:08 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:52.859 16:51:08 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:52.859 16:51:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:52.859 16:51:08 -- common/autotest_common.sh@10 -- # set +x 00:03:52.859 16:51:08 -- json_config/json_config.sh@323 -- # killprocess 1569475 00:03:52.859 16:51:08 -- common/autotest_common.sh@936 -- # '[' -z 1569475 ']' 00:03:52.859 16:51:08 -- common/autotest_common.sh@940 -- # kill -0 1569475 00:03:52.859 16:51:08 -- common/autotest_common.sh@941 -- # uname 00:03:52.859 16:51:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:52.859 16:51:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1569475 00:03:52.859 
16:51:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:52.859 16:51:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:52.859 16:51:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1569475' 00:03:52.859 killing process with pid 1569475 00:03:52.859 16:51:08 -- common/autotest_common.sh@955 -- # kill 1569475 00:03:52.859 16:51:08 -- common/autotest_common.sh@960 -- # wait 1569475 00:03:54.757 16:51:10 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.757 16:51:10 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:03:54.757 16:51:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:54.757 16:51:10 -- common/autotest_common.sh@10 -- # set +x 00:03:54.757 16:51:10 -- json_config/json_config.sh@328 -- # return 0 00:03:54.757 16:51:10 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:03:54.757 INFO: Success 00:03:54.757 00:03:54.757 real 0m16.682s 00:03:54.757 user 0m18.617s 00:03:54.757 sys 0m2.037s 00:03:54.757 16:51:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:54.757 16:51:10 -- common/autotest_common.sh@10 -- # set +x 00:03:54.757 ************************************ 00:03:54.757 END TEST json_config 00:03:54.757 ************************************ 00:03:54.757 16:51:10 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:54.757 16:51:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:54.757 16:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:54.757 16:51:10 -- common/autotest_common.sh@10 -- # set +x 00:03:54.757 ************************************ 00:03:54.757 START TEST json_config_extra_key 00:03:54.757 ************************************ 00:03:54.757 
16:51:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:54.757 16:51:10 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:54.757 16:51:10 -- nvmf/common.sh@7 -- # uname -s 00:03:54.757 16:51:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.757 16:51:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.757 16:51:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.757 16:51:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.757 16:51:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:54.757 16:51:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.757 16:51:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:54.757 16:51:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.757 16:51:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:54.757 16:51:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.757 16:51:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:54.757 16:51:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:54.757 16:51:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.757 16:51:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.757 16:51:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:54.757 16:51:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:54.757 16:51:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:54.757 16:51:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.757 16:51:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.757 16:51:10 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.757 16:51:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.757 16:51:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.757 16:51:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.757 16:51:10 -- paths/export.sh@5 -- # export PATH 00:03:54.757 16:51:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.757 16:51:10 -- nvmf/common.sh@47 -- # : 0 00:03:54.757 16:51:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:03:54.757 16:51:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:54.757 16:51:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.757 16:51:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.757 16:51:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.757 16:51:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:54.757 16:51:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:54.757 16:51:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:54.757 16:51:10 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:54.757 16:51:10 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:54.757 16:51:10 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:54.757 16:51:10 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:54.758 16:51:10 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:54.758 16:51:10 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:54.758 16:51:10 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:54.758 16:51:10 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:54.758 16:51:10 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:54.758 16:51:10 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:54.758 16:51:10 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:54.758 INFO: launching applications... 
00:03:54.758 16:51:10 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:54.758 16:51:10 -- json_config/common.sh@9 -- # local app=target 00:03:54.758 16:51:10 -- json_config/common.sh@10 -- # shift 00:03:54.758 16:51:10 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:54.758 16:51:10 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:54.758 16:51:10 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:54.758 16:51:10 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.758 16:51:10 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.758 16:51:10 -- json_config/common.sh@22 -- # app_pid["$app"]=1571028 00:03:54.758 16:51:10 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:54.758 16:51:10 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:54.758 Waiting for target to run... 00:03:54.758 16:51:10 -- json_config/common.sh@25 -- # waitforlisten 1571028 /var/tmp/spdk_tgt.sock 00:03:54.758 16:51:10 -- common/autotest_common.sh@817 -- # '[' -z 1571028 ']' 00:03:54.758 16:51:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.758 16:51:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:54.758 16:51:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:54.758 16:51:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:54.758 16:51:10 -- common/autotest_common.sh@10 -- # set +x 00:03:54.758 [2024-04-18 16:51:10.341480] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:03:54.758 [2024-04-18 16:51:10.341581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571028 ] 00:03:54.758 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.016 [2024-04-18 16:51:10.691055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.274 [2024-04-18 16:51:10.777646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.840 16:51:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:55.840 16:51:11 -- common/autotest_common.sh@850 -- # return 0 00:03:55.840 16:51:11 -- json_config/common.sh@26 -- # echo '' 00:03:55.840 00:03:55.840 16:51:11 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:55.840 INFO: shutting down applications... 
00:03:55.840 16:51:11 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:55.840 16:51:11 -- json_config/common.sh@31 -- # local app=target 00:03:55.840 16:51:11 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:55.840 16:51:11 -- json_config/common.sh@35 -- # [[ -n 1571028 ]] 00:03:55.840 16:51:11 -- json_config/common.sh@38 -- # kill -SIGINT 1571028 00:03:55.840 16:51:11 -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:55.840 16:51:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.840 16:51:11 -- json_config/common.sh@41 -- # kill -0 1571028 00:03:55.840 16:51:11 -- json_config/common.sh@45 -- # sleep 0.5 00:03:56.099 16:51:11 -- json_config/common.sh@40 -- # (( i++ )) 00:03:56.099 16:51:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:56.099 16:51:11 -- json_config/common.sh@41 -- # kill -0 1571028 00:03:56.099 16:51:11 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:56.099 16:51:11 -- json_config/common.sh@43 -- # break 00:03:56.099 16:51:11 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:56.099 16:51:11 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:56.099 SPDK target shutdown done 00:03:56.099 16:51:11 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:56.099 Success 00:03:56.099 00:03:56.099 real 0m1.529s 00:03:56.099 user 0m1.513s 00:03:56.099 sys 0m0.444s 00:03:56.099 16:51:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.099 16:51:11 -- common/autotest_common.sh@10 -- # set +x 00:03:56.099 ************************************ 00:03:56.099 END TEST json_config_extra_key 00:03:56.099 ************************************ 00:03:56.099 16:51:11 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:56.099 16:51:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.099 16:51:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:03:56.099 16:51:11 -- common/autotest_common.sh@10 -- # set +x 00:03:56.357 ************************************ 00:03:56.357 START TEST alias_rpc 00:03:56.357 ************************************ 00:03:56.357 16:51:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:56.357 * Looking for test storage... 00:03:56.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:56.357 16:51:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:56.357 16:51:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1571225 00:03:56.357 16:51:11 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.357 16:51:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1571225 00:03:56.357 16:51:11 -- common/autotest_common.sh@817 -- # '[' -z 1571225 ']' 00:03:56.357 16:51:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.357 16:51:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:56.357 16:51:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.357 16:51:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:56.357 16:51:11 -- common/autotest_common.sh@10 -- # set +x 00:03:56.357 [2024-04-18 16:51:11.993903] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:03:56.357 [2024-04-18 16:51:11.993996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571225 ] 00:03:56.357 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.357 [2024-04-18 16:51:12.049579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.617 [2024-04-18 16:51:12.154176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.875 16:51:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:56.875 16:51:12 -- common/autotest_common.sh@850 -- # return 0 00:03:56.875 16:51:12 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:57.133 16:51:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1571225 00:03:57.133 16:51:12 -- common/autotest_common.sh@936 -- # '[' -z 1571225 ']' 00:03:57.133 16:51:12 -- common/autotest_common.sh@940 -- # kill -0 1571225 00:03:57.133 16:51:12 -- common/autotest_common.sh@941 -- # uname 00:03:57.133 16:51:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:57.133 16:51:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1571225 00:03:57.133 16:51:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:57.133 16:51:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:57.133 16:51:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1571225' 00:03:57.133 killing process with pid 1571225 00:03:57.133 16:51:12 -- common/autotest_common.sh@955 -- # kill 1571225 00:03:57.133 16:51:12 -- common/autotest_common.sh@960 -- # wait 1571225 00:03:57.700 00:03:57.700 real 0m1.275s 00:03:57.700 user 0m1.352s 00:03:57.700 sys 0m0.409s 00:03:57.700 16:51:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:57.700 16:51:13 -- common/autotest_common.sh@10 -- # set +x 
00:03:57.700 ************************************ 00:03:57.700 END TEST alias_rpc 00:03:57.700 ************************************ 00:03:57.700 16:51:13 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:03:57.700 16:51:13 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:57.700 16:51:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.700 16:51:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.700 16:51:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.700 ************************************ 00:03:57.700 START TEST spdkcli_tcp 00:03:57.700 ************************************ 00:03:57.700 16:51:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:57.700 * Looking for test storage... 00:03:57.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:57.700 16:51:13 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:57.700 16:51:13 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:57.700 16:51:13 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:57.700 16:51:13 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:57.700 16:51:13 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:57.700 16:51:13 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:57.700 16:51:13 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:57.700 16:51:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:57.700 16:51:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.700 16:51:13 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1571425 00:03:57.700 16:51:13 -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:57.700 16:51:13 -- spdkcli/tcp.sh@27 -- # waitforlisten 1571425 00:03:57.700 16:51:13 -- common/autotest_common.sh@817 -- # '[' -z 1571425 ']' 00:03:57.700 16:51:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.700 16:51:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:57.700 16:51:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.700 16:51:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:57.700 16:51:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.701 [2024-04-18 16:51:13.399241] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:03:57.701 [2024-04-18 16:51:13.399319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571425 ] 00:03:57.968 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.969 [2024-04-18 16:51:13.460403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:57.969 [2024-04-18 16:51:13.582408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.969 [2024-04-18 16:51:13.582413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.910 16:51:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:58.910 16:51:14 -- common/autotest_common.sh@850 -- # return 0 00:03:58.910 16:51:14 -- spdkcli/tcp.sh@31 -- # socat_pid=1571562 00:03:58.910 16:51:14 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:58.910 16:51:14 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 
100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:58.910 [ 00:03:58.910 "bdev_malloc_delete", 00:03:58.910 "bdev_malloc_create", 00:03:58.910 "bdev_null_resize", 00:03:58.910 "bdev_null_delete", 00:03:58.910 "bdev_null_create", 00:03:58.910 "bdev_nvme_cuse_unregister", 00:03:58.910 "bdev_nvme_cuse_register", 00:03:58.910 "bdev_opal_new_user", 00:03:58.910 "bdev_opal_set_lock_state", 00:03:58.910 "bdev_opal_delete", 00:03:58.910 "bdev_opal_get_info", 00:03:58.910 "bdev_opal_create", 00:03:58.910 "bdev_nvme_opal_revert", 00:03:58.910 "bdev_nvme_opal_init", 00:03:58.910 "bdev_nvme_send_cmd", 00:03:58.910 "bdev_nvme_get_path_iostat", 00:03:58.910 "bdev_nvme_get_mdns_discovery_info", 00:03:58.910 "bdev_nvme_stop_mdns_discovery", 00:03:58.910 "bdev_nvme_start_mdns_discovery", 00:03:58.910 "bdev_nvme_set_multipath_policy", 00:03:58.910 "bdev_nvme_set_preferred_path", 00:03:58.910 "bdev_nvme_get_io_paths", 00:03:58.910 "bdev_nvme_remove_error_injection", 00:03:58.910 "bdev_nvme_add_error_injection", 00:03:58.910 "bdev_nvme_get_discovery_info", 00:03:58.910 "bdev_nvme_stop_discovery", 00:03:58.910 "bdev_nvme_start_discovery", 00:03:58.910 "bdev_nvme_get_controller_health_info", 00:03:58.910 "bdev_nvme_disable_controller", 00:03:58.910 "bdev_nvme_enable_controller", 00:03:58.910 "bdev_nvme_reset_controller", 00:03:58.910 "bdev_nvme_get_transport_statistics", 00:03:58.910 "bdev_nvme_apply_firmware", 00:03:58.910 "bdev_nvme_detach_controller", 00:03:58.910 "bdev_nvme_get_controllers", 00:03:58.910 "bdev_nvme_attach_controller", 00:03:58.910 "bdev_nvme_set_hotplug", 00:03:58.910 "bdev_nvme_set_options", 00:03:58.910 "bdev_passthru_delete", 00:03:58.910 "bdev_passthru_create", 00:03:58.910 "bdev_lvol_grow_lvstore", 00:03:58.910 "bdev_lvol_get_lvols", 00:03:58.910 "bdev_lvol_get_lvstores", 00:03:58.910 "bdev_lvol_delete", 00:03:58.910 "bdev_lvol_set_read_only", 00:03:58.910 "bdev_lvol_resize", 00:03:58.910 "bdev_lvol_decouple_parent", 00:03:58.910 "bdev_lvol_inflate", 
00:03:58.910 "bdev_lvol_rename", 00:03:58.910 "bdev_lvol_clone_bdev", 00:03:58.910 "bdev_lvol_clone", 00:03:58.910 "bdev_lvol_snapshot", 00:03:58.910 "bdev_lvol_create", 00:03:58.910 "bdev_lvol_delete_lvstore", 00:03:58.910 "bdev_lvol_rename_lvstore", 00:03:58.910 "bdev_lvol_create_lvstore", 00:03:58.910 "bdev_raid_set_options", 00:03:58.910 "bdev_raid_remove_base_bdev", 00:03:58.910 "bdev_raid_add_base_bdev", 00:03:58.910 "bdev_raid_delete", 00:03:58.911 "bdev_raid_create", 00:03:58.911 "bdev_raid_get_bdevs", 00:03:58.911 "bdev_error_inject_error", 00:03:58.911 "bdev_error_delete", 00:03:58.911 "bdev_error_create", 00:03:58.911 "bdev_split_delete", 00:03:58.911 "bdev_split_create", 00:03:58.911 "bdev_delay_delete", 00:03:58.911 "bdev_delay_create", 00:03:58.911 "bdev_delay_update_latency", 00:03:58.911 "bdev_zone_block_delete", 00:03:58.911 "bdev_zone_block_create", 00:03:58.911 "blobfs_create", 00:03:58.911 "blobfs_detect", 00:03:58.911 "blobfs_set_cache_size", 00:03:58.911 "bdev_aio_delete", 00:03:58.911 "bdev_aio_rescan", 00:03:58.911 "bdev_aio_create", 00:03:58.911 "bdev_ftl_set_property", 00:03:58.911 "bdev_ftl_get_properties", 00:03:58.911 "bdev_ftl_get_stats", 00:03:58.911 "bdev_ftl_unmap", 00:03:58.911 "bdev_ftl_unload", 00:03:58.911 "bdev_ftl_delete", 00:03:58.911 "bdev_ftl_load", 00:03:58.911 "bdev_ftl_create", 00:03:58.911 "bdev_virtio_attach_controller", 00:03:58.911 "bdev_virtio_scsi_get_devices", 00:03:58.911 "bdev_virtio_detach_controller", 00:03:58.911 "bdev_virtio_blk_set_hotplug", 00:03:58.911 "bdev_iscsi_delete", 00:03:58.911 "bdev_iscsi_create", 00:03:58.911 "bdev_iscsi_set_options", 00:03:58.911 "accel_error_inject_error", 00:03:58.911 "ioat_scan_accel_module", 00:03:58.911 "dsa_scan_accel_module", 00:03:58.911 "iaa_scan_accel_module", 00:03:58.911 "vfu_virtio_create_scsi_endpoint", 00:03:58.911 "vfu_virtio_scsi_remove_target", 00:03:58.911 "vfu_virtio_scsi_add_target", 00:03:58.911 "vfu_virtio_create_blk_endpoint", 00:03:58.911 
"vfu_virtio_delete_endpoint", 00:03:58.911 "keyring_file_remove_key", 00:03:58.911 "keyring_file_add_key", 00:03:58.911 "iscsi_set_options", 00:03:58.911 "iscsi_get_auth_groups", 00:03:58.911 "iscsi_auth_group_remove_secret", 00:03:58.911 "iscsi_auth_group_add_secret", 00:03:58.911 "iscsi_delete_auth_group", 00:03:58.911 "iscsi_create_auth_group", 00:03:58.911 "iscsi_set_discovery_auth", 00:03:58.911 "iscsi_get_options", 00:03:58.911 "iscsi_target_node_request_logout", 00:03:58.911 "iscsi_target_node_set_redirect", 00:03:58.911 "iscsi_target_node_set_auth", 00:03:58.911 "iscsi_target_node_add_lun", 00:03:58.911 "iscsi_get_stats", 00:03:58.911 "iscsi_get_connections", 00:03:58.911 "iscsi_portal_group_set_auth", 00:03:58.911 "iscsi_start_portal_group", 00:03:58.911 "iscsi_delete_portal_group", 00:03:58.911 "iscsi_create_portal_group", 00:03:58.911 "iscsi_get_portal_groups", 00:03:58.911 "iscsi_delete_target_node", 00:03:58.911 "iscsi_target_node_remove_pg_ig_maps", 00:03:58.911 "iscsi_target_node_add_pg_ig_maps", 00:03:58.911 "iscsi_create_target_node", 00:03:58.911 "iscsi_get_target_nodes", 00:03:58.911 "iscsi_delete_initiator_group", 00:03:58.911 "iscsi_initiator_group_remove_initiators", 00:03:58.911 "iscsi_initiator_group_add_initiators", 00:03:58.911 "iscsi_create_initiator_group", 00:03:58.911 "iscsi_get_initiator_groups", 00:03:58.911 "nvmf_set_crdt", 00:03:58.911 "nvmf_set_config", 00:03:58.911 "nvmf_set_max_subsystems", 00:03:58.911 "nvmf_subsystem_get_listeners", 00:03:58.911 "nvmf_subsystem_get_qpairs", 00:03:58.911 "nvmf_subsystem_get_controllers", 00:03:58.911 "nvmf_get_stats", 00:03:58.911 "nvmf_get_transports", 00:03:58.911 "nvmf_create_transport", 00:03:58.911 "nvmf_get_targets", 00:03:58.911 "nvmf_delete_target", 00:03:58.911 "nvmf_create_target", 00:03:58.911 "nvmf_subsystem_allow_any_host", 00:03:58.911 "nvmf_subsystem_remove_host", 00:03:58.911 "nvmf_subsystem_add_host", 00:03:58.911 "nvmf_ns_remove_host", 00:03:58.911 "nvmf_ns_add_host", 
00:03:58.911 "nvmf_subsystem_remove_ns", 00:03:58.911 "nvmf_subsystem_add_ns", 00:03:58.911 "nvmf_subsystem_listener_set_ana_state", 00:03:58.911 "nvmf_discovery_get_referrals", 00:03:58.911 "nvmf_discovery_remove_referral", 00:03:58.911 "nvmf_discovery_add_referral", 00:03:58.911 "nvmf_subsystem_remove_listener", 00:03:58.911 "nvmf_subsystem_add_listener", 00:03:58.911 "nvmf_delete_subsystem", 00:03:58.911 "nvmf_create_subsystem", 00:03:58.911 "nvmf_get_subsystems", 00:03:58.911 "env_dpdk_get_mem_stats", 00:03:58.911 "nbd_get_disks", 00:03:58.911 "nbd_stop_disk", 00:03:58.911 "nbd_start_disk", 00:03:58.911 "ublk_recover_disk", 00:03:58.911 "ublk_get_disks", 00:03:58.911 "ublk_stop_disk", 00:03:58.911 "ublk_start_disk", 00:03:58.911 "ublk_destroy_target", 00:03:58.911 "ublk_create_target", 00:03:58.911 "virtio_blk_create_transport", 00:03:58.911 "virtio_blk_get_transports", 00:03:58.911 "vhost_controller_set_coalescing", 00:03:58.911 "vhost_get_controllers", 00:03:58.911 "vhost_delete_controller", 00:03:58.911 "vhost_create_blk_controller", 00:03:58.911 "vhost_scsi_controller_remove_target", 00:03:58.911 "vhost_scsi_controller_add_target", 00:03:58.911 "vhost_start_scsi_controller", 00:03:58.911 "vhost_create_scsi_controller", 00:03:58.911 "thread_set_cpumask", 00:03:58.911 "framework_get_scheduler", 00:03:58.911 "framework_set_scheduler", 00:03:58.911 "framework_get_reactors", 00:03:58.911 "thread_get_io_channels", 00:03:58.911 "thread_get_pollers", 00:03:58.911 "thread_get_stats", 00:03:58.911 "framework_monitor_context_switch", 00:03:58.911 "spdk_kill_instance", 00:03:58.911 "log_enable_timestamps", 00:03:58.911 "log_get_flags", 00:03:58.911 "log_clear_flag", 00:03:58.911 "log_set_flag", 00:03:58.911 "log_get_level", 00:03:58.911 "log_set_level", 00:03:58.911 "log_get_print_level", 00:03:58.911 "log_set_print_level", 00:03:58.911 "framework_enable_cpumask_locks", 00:03:58.911 "framework_disable_cpumask_locks", 00:03:58.911 "framework_wait_init", 00:03:58.911 
"framework_start_init", 00:03:58.911 "scsi_get_devices", 00:03:58.911 "bdev_get_histogram", 00:03:58.911 "bdev_enable_histogram", 00:03:58.911 "bdev_set_qos_limit", 00:03:58.911 "bdev_set_qd_sampling_period", 00:03:58.911 "bdev_get_bdevs", 00:03:58.911 "bdev_reset_iostat", 00:03:58.911 "bdev_get_iostat", 00:03:58.911 "bdev_examine", 00:03:58.911 "bdev_wait_for_examine", 00:03:58.911 "bdev_set_options", 00:03:58.911 "notify_get_notifications", 00:03:58.911 "notify_get_types", 00:03:58.911 "accel_get_stats", 00:03:58.911 "accel_set_options", 00:03:58.911 "accel_set_driver", 00:03:58.911 "accel_crypto_key_destroy", 00:03:58.911 "accel_crypto_keys_get", 00:03:58.911 "accel_crypto_key_create", 00:03:58.911 "accel_assign_opc", 00:03:58.911 "accel_get_module_info", 00:03:58.911 "accel_get_opc_assignments", 00:03:58.911 "vmd_rescan", 00:03:58.911 "vmd_remove_device", 00:03:58.911 "vmd_enable", 00:03:58.911 "sock_set_default_impl", 00:03:58.911 "sock_impl_set_options", 00:03:58.911 "sock_impl_get_options", 00:03:58.911 "iobuf_get_stats", 00:03:58.911 "iobuf_set_options", 00:03:58.911 "keyring_get_keys", 00:03:58.911 "framework_get_pci_devices", 00:03:58.911 "framework_get_config", 00:03:58.911 "framework_get_subsystems", 00:03:58.911 "vfu_tgt_set_base_path", 00:03:58.911 "trace_get_info", 00:03:58.911 "trace_get_tpoint_group_mask", 00:03:58.911 "trace_disable_tpoint_group", 00:03:58.911 "trace_enable_tpoint_group", 00:03:58.911 "trace_clear_tpoint_mask", 00:03:58.911 "trace_set_tpoint_mask", 00:03:58.911 "spdk_get_version", 00:03:58.911 "rpc_get_methods" 00:03:58.911 ] 00:03:58.911 16:51:14 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:58.911 16:51:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:58.911 16:51:14 -- common/autotest_common.sh@10 -- # set +x 00:03:58.911 16:51:14 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:58.911 16:51:14 -- spdkcli/tcp.sh@38 -- # killprocess 1571425 00:03:58.911 16:51:14 -- 
common/autotest_common.sh@936 -- # '[' -z 1571425 ']' 00:03:58.911 16:51:14 -- common/autotest_common.sh@940 -- # kill -0 1571425 00:03:58.911 16:51:14 -- common/autotest_common.sh@941 -- # uname 00:03:58.911 16:51:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:58.911 16:51:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1571425 00:03:58.911 16:51:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:58.911 16:51:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:58.911 16:51:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1571425' 00:03:58.911 killing process with pid 1571425 00:03:58.911 16:51:14 -- common/autotest_common.sh@955 -- # kill 1571425 00:03:58.911 16:51:14 -- common/autotest_common.sh@960 -- # wait 1571425 00:03:59.478 00:03:59.478 real 0m1.771s 00:03:59.478 user 0m3.393s 00:03:59.478 sys 0m0.454s 00:03:59.478 16:51:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.478 16:51:15 -- common/autotest_common.sh@10 -- # set +x 00:03:59.478 ************************************ 00:03:59.478 END TEST spdkcli_tcp 00:03:59.478 ************************************ 00:03:59.478 16:51:15 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:59.478 16:51:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.478 16:51:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.478 16:51:15 -- common/autotest_common.sh@10 -- # set +x 00:03:59.736 ************************************ 00:03:59.736 START TEST dpdk_mem_utility 00:03:59.736 ************************************ 00:03:59.736 16:51:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:59.736 * Looking for test storage... 
00:03:59.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:59.736 16:51:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:59.736 16:51:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1571762 00:03:59.736 16:51:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.736 16:51:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1571762 00:03:59.736 16:51:15 -- common/autotest_common.sh@817 -- # '[' -z 1571762 ']' 00:03:59.736 16:51:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.736 16:51:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:59.736 16:51:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.736 16:51:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:59.736 16:51:15 -- common/autotest_common.sh@10 -- # set +x 00:03:59.736 [2024-04-18 16:51:15.290454] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:03:59.736 [2024-04-18 16:51:15.290534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571762 ] 00:03:59.736 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.736 [2024-04-18 16:51:15.358199] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.994 [2024-04-18 16:51:15.473732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.566 16:51:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:00.566 16:51:16 -- common/autotest_common.sh@850 -- # return 0 00:04:00.566 16:51:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:00.566 16:51:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:00.566 16:51:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:00.566 16:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:00.566 { 00:04:00.566 "filename": "/tmp/spdk_mem_dump.txt" 00:04:00.566 } 00:04:00.566 16:51:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:00.566 16:51:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:00.566 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:00.566 1 heaps totaling size 814.000000 MiB 00:04:00.566 size: 814.000000 MiB heap id: 0 00:04:00.566 end heaps---------- 00:04:00.566 8 mempools totaling size 598.116089 MiB 00:04:00.566 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:00.566 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:00.566 size: 84.521057 MiB name: bdev_io_1571762 00:04:00.566 size: 51.011292 MiB name: evtpool_1571762 00:04:00.566 size: 50.003479 MiB name: msgpool_1571762 00:04:00.566 size: 21.763794 MiB name: PDU_Pool 00:04:00.566 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:04:00.566 size: 0.026123 MiB name: Session_Pool 00:04:00.566 end mempools------- 00:04:00.566 6 memzones totaling size 4.142822 MiB 00:04:00.566 size: 1.000366 MiB name: RG_ring_0_1571762 00:04:00.566 size: 1.000366 MiB name: RG_ring_1_1571762 00:04:00.566 size: 1.000366 MiB name: RG_ring_4_1571762 00:04:00.566 size: 1.000366 MiB name: RG_ring_5_1571762 00:04:00.566 size: 0.125366 MiB name: RG_ring_2_1571762 00:04:00.566 size: 0.015991 MiB name: RG_ring_3_1571762 00:04:00.566 end memzones------- 00:04:00.824 16:51:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:00.824 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:00.824 list of free elements. size: 12.519348 MiB 00:04:00.824 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:00.824 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:00.824 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:00.824 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:00.824 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:00.824 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:00.824 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:00.824 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:00.824 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:00.824 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:00.824 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:00.824 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:00.824 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:00.824 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:00.824 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:00.824 list of standard malloc elements. 
size: 199.218079 MiB 00:04:00.824 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:00.824 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:00.824 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:00.824 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:00.824 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:00.824 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:00.824 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:00.824 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:00.824 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:00.824 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:00.824 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:00.824 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:00.824 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:04:00.824 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:00.824 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:00.824 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:00.824 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:00.824 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:00.824 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:00.824 list of memzone associated elements. 
size: 602.262573 MiB 00:04:00.824 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:00.824 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:00.824 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:00.824 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:00.824 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:00.824 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1571762_0 00:04:00.824 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:00.824 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1571762_0 00:04:00.824 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:00.824 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1571762_0 00:04:00.824 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:00.824 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:00.824 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:00.824 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:00.824 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:00.824 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1571762 00:04:00.824 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:00.824 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1571762 00:04:00.824 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:00.824 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1571762 00:04:00.824 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:00.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:00.824 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:00.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:00.824 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:00.824 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:00.824 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:00.825 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:00.825 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:00.825 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1571762 00:04:00.825 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:00.825 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1571762 00:04:00.825 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:00.825 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1571762 00:04:00.825 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:00.825 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1571762 00:04:00.825 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:00.825 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1571762 00:04:00.825 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:00.825 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:00.825 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:00.825 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:00.825 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:00.825 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:00.825 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:00.825 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1571762 00:04:00.825 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:00.825 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:00.825 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:00.825 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:00.825 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:04:00.825 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1571762 00:04:00.825 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:00.825 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:00.825 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:00.825 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1571762 00:04:00.825 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:00.825 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1571762 00:04:00.825 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:00.825 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:00.825 16:51:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:00.825 16:51:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1571762 00:04:00.825 16:51:16 -- common/autotest_common.sh@936 -- # '[' -z 1571762 ']' 00:04:00.825 16:51:16 -- common/autotest_common.sh@940 -- # kill -0 1571762 00:04:00.825 16:51:16 -- common/autotest_common.sh@941 -- # uname 00:04:00.825 16:51:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:00.825 16:51:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1571762 00:04:00.825 16:51:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:00.825 16:51:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:00.825 16:51:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1571762' 00:04:00.825 killing process with pid 1571762 00:04:00.825 16:51:16 -- common/autotest_common.sh@955 -- # kill 1571762 00:04:00.825 16:51:16 -- common/autotest_common.sh@960 -- # wait 1571762 00:04:01.391 00:04:01.391 real 0m1.630s 00:04:01.391 user 0m1.765s 00:04:01.391 sys 0m0.441s 00:04:01.391 16:51:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:01.391 16:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:01.391 
************************************ 00:04:01.391 END TEST dpdk_mem_utility 00:04:01.391 ************************************ 00:04:01.391 16:51:16 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:01.391 16:51:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.391 16:51:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.391 16:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:01.391 ************************************ 00:04:01.391 START TEST event 00:04:01.391 ************************************ 00:04:01.391 16:51:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:01.391 * Looking for test storage... 00:04:01.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:01.391 16:51:16 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:01.391 16:51:16 -- bdev/nbd_common.sh@6 -- # set -e 00:04:01.391 16:51:16 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:01.391 16:51:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:01.391 16:51:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.391 16:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:01.391 ************************************ 00:04:01.391 START TEST event_perf 00:04:01.391 ************************************ 00:04:01.391 16:51:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:01.649 Running I/O for 1 seconds...[2024-04-18 16:51:17.106807] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:01.649 [2024-04-18 16:51:17.106889] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572090 ] 00:04:01.649 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.649 [2024-04-18 16:51:17.164438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:01.649 [2024-04-18 16:51:17.277326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.649 [2024-04-18 16:51:17.277376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:01.649 [2024-04-18 16:51:17.277494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:01.649 [2024-04-18 16:51:17.277498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.058 Running I/O for 1 seconds... 00:04:03.058 lcore 0: 240827 00:04:03.058 lcore 1: 240827 00:04:03.058 lcore 2: 240828 00:04:03.058 lcore 3: 240827 00:04:03.058 done. 
00:04:03.058 00:04:03.058 real 0m1.306s 00:04:03.058 user 0m4.223s 00:04:03.058 sys 0m0.078s 00:04:03.058 16:51:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.058 16:51:18 -- common/autotest_common.sh@10 -- # set +x 00:04:03.058 ************************************ 00:04:03.058 END TEST event_perf 00:04:03.058 ************************************ 00:04:03.058 16:51:18 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:03.058 16:51:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:03.058 16:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.058 16:51:18 -- common/autotest_common.sh@10 -- # set +x 00:04:03.058 ************************************ 00:04:03.058 START TEST event_reactor 00:04:03.058 ************************************ 00:04:03.058 16:51:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:03.058 [2024-04-18 16:51:18.537879] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:03.058 [2024-04-18 16:51:18.537955] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572267 ] 00:04:03.058 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.058 [2024-04-18 16:51:18.602740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.058 [2024-04-18 16:51:18.718168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.437 test_start 00:04:04.437 oneshot 00:04:04.437 tick 100 00:04:04.437 tick 100 00:04:04.437 tick 250 00:04:04.438 tick 100 00:04:04.438 tick 100 00:04:04.438 tick 100 00:04:04.438 tick 250 00:04:04.438 tick 500 00:04:04.438 tick 100 00:04:04.438 tick 100 00:04:04.438 tick 250 00:04:04.438 tick 100 00:04:04.438 tick 100 00:04:04.438 test_end 00:04:04.438 00:04:04.438 real 0m1.317s 00:04:04.438 user 0m1.228s 00:04:04.438 sys 0m0.084s 00:04:04.438 16:51:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.438 16:51:19 -- common/autotest_common.sh@10 -- # set +x 00:04:04.438 ************************************ 00:04:04.438 END TEST event_reactor 00:04:04.438 ************************************ 00:04:04.438 16:51:19 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:04.438 16:51:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:04.438 16:51:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.438 16:51:19 -- common/autotest_common.sh@10 -- # set +x 00:04:04.438 ************************************ 00:04:04.438 START TEST event_reactor_perf 00:04:04.438 ************************************ 00:04:04.438 16:51:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:04.438 [2024-04-18 16:51:19.977714] 
Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:04.438 [2024-04-18 16:51:19.977782] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572427 ] 00:04:04.438 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.438 [2024-04-18 16:51:20.045325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.697 [2024-04-18 16:51:20.162142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.636 test_start 00:04:05.636 test_end 00:04:05.636 Performance: 352809 events per second 00:04:05.636 00:04:05.636 real 0m1.320s 00:04:05.636 user 0m1.229s 00:04:05.636 sys 0m0.086s 00:04:05.636 16:51:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.636 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.636 ************************************ 00:04:05.636 END TEST event_reactor_perf 00:04:05.636 ************************************ 00:04:05.636 16:51:21 -- event/event.sh@49 -- # uname -s 00:04:05.636 16:51:21 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:05.636 16:51:21 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:05.636 16:51:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.636 16:51:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.636 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.894 ************************************ 00:04:05.894 START TEST event_scheduler 00:04:05.894 ************************************ 00:04:05.894 16:51:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:05.894 * Looking for test storage... 
00:04:05.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:05.895 16:51:21 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:05.895 16:51:21 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1572726 00:04:05.895 16:51:21 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:05.895 16:51:21 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.895 16:51:21 -- scheduler/scheduler.sh@37 -- # waitforlisten 1572726 00:04:05.895 16:51:21 -- common/autotest_common.sh@817 -- # '[' -z 1572726 ']' 00:04:05.895 16:51:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.895 16:51:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:05.895 16:51:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.895 16:51:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:05.895 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.895 [2024-04-18 16:51:21.503985] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:05.895 [2024-04-18 16:51:21.504058] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572726 ] 00:04:05.895 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.895 [2024-04-18 16:51:21.564203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.153 [2024-04-18 16:51:21.671593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.153 [2024-04-18 16:51:21.671652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.153 [2024-04-18 16:51:21.671721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:06.153 [2024-04-18 16:51:21.671724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.153 16:51:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:06.153 16:51:21 -- common/autotest_common.sh@850 -- # return 0 00:04:06.153 16:51:21 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:06.153 16:51:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.153 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.153 POWER: Env isn't set yet! 00:04:06.153 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:06.153 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:06.153 POWER: Cannot get available frequencies of lcore 0 00:04:06.153 POWER: Attempting to initialise PSTAT power management... 
00:04:06.153 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:06.153 POWER: Initialized successfully for lcore 0 power management 00:04:06.153 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:06.153 POWER: Initialized successfully for lcore 1 power management 00:04:06.153 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:06.153 POWER: Initialized successfully for lcore 2 power management 00:04:06.153 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:06.153 POWER: Initialized successfully for lcore 3 power management 00:04:06.153 16:51:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.153 16:51:21 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:06.153 16:51:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.153 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.153 [2024-04-18 16:51:21.841308] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:06.153 16:51:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.153 16:51:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:06.153 16:51:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.153 16:51:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.153 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 ************************************ 00:04:06.413 START TEST scheduler_create_thread 00:04:06.413 ************************************ 00:04:06.413 16:51:21 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:06.413 16:51:21 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:06.413 16:51:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 2 00:04:06.413 16:51:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:21 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:06.413 16:51:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 3 00:04:06.413 16:51:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:21 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:06.413 16:51:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 4 00:04:06.413 16:51:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:21 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:06.413 16:51:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 
16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 5 00:04:06.413 16:51:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:21 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:06.413 16:51:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 6 00:04:06.413 16:51:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:21 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:06.413 16:51:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 7 00:04:06.413 16:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:22 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:06.413 16:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 8 00:04:06.413 16:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:22 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:06.413 16:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 9 00:04:06.413 16:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:22 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:06.413 16:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 10 00:04:06.413 16:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:04:06.413 16:51:22 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:06.413 16:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 16:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:22 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:06.413 16:51:22 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:06.413 16:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.413 16:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.413 16:51:22 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:06.413 16:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.413 16:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:07.791 16:51:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.791 16:51:23 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:07.791 16:51:23 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:07.791 16:51:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.791 16:51:23 -- common/autotest_common.sh@10 -- # set +x 00:04:09.168 16:51:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.168 00:04:09.168 real 0m2.618s 00:04:09.168 user 0m0.012s 00:04:09.168 sys 0m0.002s 00:04:09.168 16:51:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.168 16:51:24 -- common/autotest_common.sh@10 -- # set +x 00:04:09.168 ************************************ 00:04:09.168 END TEST scheduler_create_thread 00:04:09.168 ************************************ 00:04:09.168 16:51:24 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:09.168 16:51:24 -- 
scheduler/scheduler.sh@46 -- # killprocess 1572726 00:04:09.168 16:51:24 -- common/autotest_common.sh@936 -- # '[' -z 1572726 ']' 00:04:09.168 16:51:24 -- common/autotest_common.sh@940 -- # kill -0 1572726 00:04:09.168 16:51:24 -- common/autotest_common.sh@941 -- # uname 00:04:09.168 16:51:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:09.168 16:51:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1572726 00:04:09.168 16:51:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:09.168 16:51:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:09.168 16:51:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1572726' 00:04:09.168 killing process with pid 1572726 00:04:09.168 16:51:24 -- common/autotest_common.sh@955 -- # kill 1572726 00:04:09.168 16:51:24 -- common/autotest_common.sh@960 -- # wait 1572726 00:04:09.426 [2024-04-18 16:51:25.040537] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:09.684 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:09.684 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:09.684 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:09.684 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:09.684 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:09.684 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:09.684 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:09.684 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:09.684 00:04:09.684 real 0m3.899s 00:04:09.684 user 0m5.856s 00:04:09.684 sys 0m0.397s 00:04:09.684 16:51:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.684 16:51:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.684 ************************************ 00:04:09.684 END TEST event_scheduler 00:04:09.684 ************************************ 00:04:09.684 16:51:25 -- event/event.sh@51 -- # modprobe -n nbd 00:04:09.684 16:51:25 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:09.684 16:51:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.684 16:51:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.684 16:51:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.943 ************************************ 00:04:09.943 START TEST app_repeat 00:04:09.943 ************************************ 00:04:09.943 16:51:25 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:09.943 16:51:25 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.943 16:51:25 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.943 
16:51:25 -- event/event.sh@13 -- # local nbd_list 00:04:09.943 16:51:25 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:09.943 16:51:25 -- event/event.sh@14 -- # local bdev_list 00:04:09.943 16:51:25 -- event/event.sh@15 -- # local repeat_times=4 00:04:09.943 16:51:25 -- event/event.sh@17 -- # modprobe nbd 00:04:09.943 16:51:25 -- event/event.sh@19 -- # repeat_pid=1573211 00:04:09.943 16:51:25 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:09.943 16:51:25 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.943 16:51:25 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1573211' 00:04:09.943 Process app_repeat pid: 1573211 00:04:09.943 16:51:25 -- event/event.sh@23 -- # for i in {0..2} 00:04:09.943 16:51:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:09.943 spdk_app_start Round 0 00:04:09.943 16:51:25 -- event/event.sh@25 -- # waitforlisten 1573211 /var/tmp/spdk-nbd.sock 00:04:09.943 16:51:25 -- common/autotest_common.sh@817 -- # '[' -z 1573211 ']' 00:04:09.943 16:51:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:09.943 16:51:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:09.943 16:51:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:09.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:09.943 16:51:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:09.943 16:51:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.943 [2024-04-18 16:51:25.462315] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:09.943 [2024-04-18 16:51:25.462379] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573211 ] 00:04:09.943 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.943 [2024-04-18 16:51:25.525644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.943 [2024-04-18 16:51:25.638911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.943 [2024-04-18 16:51:25.638917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.201 16:51:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:10.201 16:51:25 -- common/autotest_common.sh@850 -- # return 0 00:04:10.201 16:51:25 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.458 Malloc0 00:04:10.459 16:51:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.716 Malloc1 00:04:10.716 16:51:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@12 -- # local i 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.716 16:51:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:10.974 /dev/nbd0 00:04:10.974 16:51:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:10.974 16:51:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:10.974 16:51:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:10.974 16:51:26 -- common/autotest_common.sh@855 -- # local i 00:04:10.974 16:51:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:10.974 16:51:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:10.974 16:51:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:10.974 16:51:26 -- common/autotest_common.sh@859 -- # break 00:04:10.974 16:51:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:10.974 16:51:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:10.974 16:51:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:10.974 1+0 records in 00:04:10.974 1+0 records out 00:04:10.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183074 s, 22.4 MB/s 00:04:10.974 16:51:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.974 16:51:26 -- common/autotest_common.sh@872 -- # size=4096 00:04:10.974 16:51:26 -- common/autotest_common.sh@873 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.974 16:51:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:10.974 16:51:26 -- common/autotest_common.sh@875 -- # return 0 00:04:10.974 16:51:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:10.974 16:51:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.974 16:51:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:11.232 /dev/nbd1 00:04:11.232 16:51:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:11.232 16:51:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:11.232 16:51:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:11.232 16:51:26 -- common/autotest_common.sh@855 -- # local i 00:04:11.232 16:51:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:11.232 16:51:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:11.232 16:51:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:11.232 16:51:26 -- common/autotest_common.sh@859 -- # break 00:04:11.232 16:51:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:11.232 16:51:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:11.232 16:51:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.232 1+0 records in 00:04:11.232 1+0 records out 00:04:11.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186553 s, 22.0 MB/s 00:04:11.232 16:51:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.232 16:51:26 -- common/autotest_common.sh@872 -- # size=4096 00:04:11.232 16:51:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.232 16:51:26 -- common/autotest_common.sh@874 -- # '[' 
4096 '!=' 0 ']' 00:04:11.232 16:51:26 -- common/autotest_common.sh@875 -- # return 0 00:04:11.232 16:51:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.232 16:51:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.232 16:51:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:11.232 16:51:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.232 16:51:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:11.490 { 00:04:11.490 "nbd_device": "/dev/nbd0", 00:04:11.490 "bdev_name": "Malloc0" 00:04:11.490 }, 00:04:11.490 { 00:04:11.490 "nbd_device": "/dev/nbd1", 00:04:11.490 "bdev_name": "Malloc1" 00:04:11.490 } 00:04:11.490 ]' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:11.490 { 00:04:11.490 "nbd_device": "/dev/nbd0", 00:04:11.490 "bdev_name": "Malloc0" 00:04:11.490 }, 00:04:11.490 { 00:04:11.490 "nbd_device": "/dev/nbd1", 00:04:11.490 "bdev_name": "Malloc1" 00:04:11.490 } 00:04:11.490 ]' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:11.490 /dev/nbd1' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:11.490 /dev/nbd1' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@65 -- # count=2 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@95 -- # count=2 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.490 
16:51:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:11.490 256+0 records in 00:04:11.490 256+0 records out 00:04:11.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498714 s, 210 MB/s 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:11.490 256+0 records in 00:04:11.490 256+0 records out 00:04:11.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277402 s, 37.8 MB/s 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:11.490 256+0 records in 00:04:11.490 256+0 records out 00:04:11.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265448 s, 39.5 MB/s 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:11.490 16:51:27 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@51 -- # local i 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:11.490 16:51:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@41 -- # break 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@45 -- # return 0 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:11.748 16:51:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@41 -- # break 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.006 16:51:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@65 -- # true 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@65 -- # count=0 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@104 -- # count=0 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:12.264 16:51:27 -- bdev/nbd_common.sh@109 -- # return 0 00:04:12.264 16:51:27 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:12.522 16:51:28 -- event/event.sh@35 -- # sleep 3 00:04:12.782 [2024-04-18 16:51:28.484226] app.c: 828:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:04:13.042 [2024-04-18 16:51:28.597159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.042 [2024-04-18 16:51:28.597164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.042 [2024-04-18 16:51:28.655456] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:13.042 [2024-04-18 16:51:28.655524] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:15.579 16:51:31 -- event/event.sh@23 -- # for i in {0..2} 00:04:15.579 16:51:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:15.579 spdk_app_start Round 1 00:04:15.579 16:51:31 -- event/event.sh@25 -- # waitforlisten 1573211 /var/tmp/spdk-nbd.sock 00:04:15.580 16:51:31 -- common/autotest_common.sh@817 -- # '[' -z 1573211 ']' 00:04:15.580 16:51:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:15.580 16:51:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:15.580 16:51:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:15.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:15.580 16:51:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:15.580 16:51:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.837 16:51:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:15.837 16:51:31 -- common/autotest_common.sh@850 -- # return 0 00:04:15.837 16:51:31 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.094 Malloc0 00:04:16.094 16:51:31 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.351 Malloc1 00:04:16.351 16:51:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@12 -- # local i 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.351 16:51:31 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:16.608 /dev/nbd0 00:04:16.608 16:51:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:16.608 16:51:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:16.608 16:51:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:16.608 16:51:32 -- common/autotest_common.sh@855 -- # local i 00:04:16.608 16:51:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:16.608 16:51:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:16.608 16:51:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:16.608 16:51:32 -- common/autotest_common.sh@859 -- # break 00:04:16.608 16:51:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:16.608 16:51:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:16.608 16:51:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.608 1+0 records in 00:04:16.608 1+0 records out 00:04:16.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018843 s, 21.7 MB/s 00:04:16.608 16:51:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.608 16:51:32 -- common/autotest_common.sh@872 -- # size=4096 00:04:16.608 16:51:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.608 16:51:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:16.608 16:51:32 -- common/autotest_common.sh@875 -- # return 0 00:04:16.608 16:51:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.608 16:51:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.608 16:51:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:04:16.866 /dev/nbd1 00:04:16.866 16:51:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:16.866 16:51:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:16.866 16:51:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:16.866 16:51:32 -- common/autotest_common.sh@855 -- # local i 00:04:16.866 16:51:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:16.866 16:51:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:16.866 16:51:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:16.866 16:51:32 -- common/autotest_common.sh@859 -- # break 00:04:16.866 16:51:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:16.866 16:51:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:16.866 16:51:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.866 1+0 records in 00:04:16.866 1+0 records out 00:04:16.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184342 s, 22.2 MB/s 00:04:16.866 16:51:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.866 16:51:32 -- common/autotest_common.sh@872 -- # size=4096 00:04:16.866 16:51:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.866 16:51:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:16.866 16:51:32 -- common/autotest_common.sh@875 -- # return 0 00:04:16.866 16:51:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.866 16:51:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.866 16:51:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.866 16:51:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.866 16:51:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:17.124 { 00:04:17.124 "nbd_device": "/dev/nbd0", 00:04:17.124 "bdev_name": "Malloc0" 00:04:17.124 }, 00:04:17.124 { 00:04:17.124 "nbd_device": "/dev/nbd1", 00:04:17.124 "bdev_name": "Malloc1" 00:04:17.124 } 00:04:17.124 ]' 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.124 { 00:04:17.124 "nbd_device": "/dev/nbd0", 00:04:17.124 "bdev_name": "Malloc0" 00:04:17.124 }, 00:04:17.124 { 00:04:17.124 "nbd_device": "/dev/nbd1", 00:04:17.124 "bdev_name": "Malloc1" 00:04:17.124 } 00:04:17.124 ]' 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.124 /dev/nbd1' 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.124 /dev/nbd1' 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.124 16:51:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.382 256+0 records in 00:04:17.382 256+0 records out 00:04:17.382 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387028 s, 271 MB/s 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.382 256+0 records in 00:04:17.382 256+0 records out 00:04:17.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242877 s, 43.2 MB/s 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.382 256+0 records in 00:04:17.382 256+0 records out 00:04:17.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028749 s, 36.5 MB/s 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@51 -- # local i 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.382 16:51:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@41 -- # break 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.642 16:51:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:17.900 16:51:33 -- 
bdev/nbd_common.sh@41 -- # break 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.900 16:51:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@65 -- # true 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@65 -- # count=0 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@104 -- # count=0 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:18.157 16:51:33 -- bdev/nbd_common.sh@109 -- # return 0 00:04:18.157 16:51:33 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.416 16:51:33 -- event/event.sh@35 -- # sleep 3 00:04:18.675 [2024-04-18 16:51:34.237313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.675 [2024-04-18 16:51:34.349612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.675 [2024-04-18 16:51:34.349617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.934 [2024-04-18 16:51:34.409222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:04:18.934 [2024-04-18 16:51:34.409294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:21.468 16:51:36 -- event/event.sh@23 -- # for i in {0..2} 00:04:21.468 16:51:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:21.468 spdk_app_start Round 2 00:04:21.468 16:51:36 -- event/event.sh@25 -- # waitforlisten 1573211 /var/tmp/spdk-nbd.sock 00:04:21.468 16:51:36 -- common/autotest_common.sh@817 -- # '[' -z 1573211 ']' 00:04:21.468 16:51:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:21.468 16:51:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:21.468 16:51:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:21.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:21.468 16:51:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:21.468 16:51:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.756 16:51:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:21.756 16:51:37 -- common/autotest_common.sh@850 -- # return 0 00:04:21.756 16:51:37 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.019 Malloc0 00:04:22.019 16:51:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.277 Malloc1 00:04:22.277 16:51:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:22.277 16:51:37 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@12 -- # local i 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.277 16:51:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:22.535 /dev/nbd0 00:04:22.535 16:51:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:22.535 16:51:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:22.535 16:51:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:22.535 16:51:38 -- common/autotest_common.sh@855 -- # local i 00:04:22.535 16:51:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:22.535 16:51:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:22.535 16:51:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:22.535 16:51:38 -- common/autotest_common.sh@859 -- # break 00:04:22.535 16:51:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:22.535 16:51:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:22.536 16:51:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.536 1+0 records in 00:04:22.536 
1+0 records out 00:04:22.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158133 s, 25.9 MB/s 00:04:22.536 16:51:38 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.536 16:51:38 -- common/autotest_common.sh@872 -- # size=4096 00:04:22.536 16:51:38 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.536 16:51:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:22.536 16:51:38 -- common/autotest_common.sh@875 -- # return 0 00:04:22.536 16:51:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.536 16:51:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.536 16:51:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:22.793 /dev/nbd1 00:04:22.794 16:51:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:22.794 16:51:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:22.794 16:51:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:22.794 16:51:38 -- common/autotest_common.sh@855 -- # local i 00:04:22.794 16:51:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:22.794 16:51:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:22.794 16:51:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:22.794 16:51:38 -- common/autotest_common.sh@859 -- # break 00:04:22.794 16:51:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:22.794 16:51:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:22.794 16:51:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.794 1+0 records in 00:04:22.794 1+0 records out 00:04:22.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226294 s, 18.1 MB/s 00:04:22.794 16:51:38 -- 
common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.794 16:51:38 -- common/autotest_common.sh@872 -- # size=4096 00:04:22.794 16:51:38 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.794 16:51:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:22.794 16:51:38 -- common/autotest_common.sh@875 -- # return 0 00:04:22.794 16:51:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.794 16:51:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.794 16:51:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.794 16:51:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.794 16:51:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.052 { 00:04:23.052 "nbd_device": "/dev/nbd0", 00:04:23.052 "bdev_name": "Malloc0" 00:04:23.052 }, 00:04:23.052 { 00:04:23.052 "nbd_device": "/dev/nbd1", 00:04:23.052 "bdev_name": "Malloc1" 00:04:23.052 } 00:04:23.052 ]' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.052 { 00:04:23.052 "nbd_device": "/dev/nbd0", 00:04:23.052 "bdev_name": "Malloc0" 00:04:23.052 }, 00:04:23.052 { 00:04:23.052 "nbd_device": "/dev/nbd1", 00:04:23.052 "bdev_name": "Malloc1" 00:04:23.052 } 00:04:23.052 ]' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.052 /dev/nbd1' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.052 /dev/nbd1' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.052 
16:51:38 -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.052 256+0 records in 00:04:23.052 256+0 records out 00:04:23.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495633 s, 212 MB/s 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.052 256+0 records in 00:04:23.052 256+0 records out 00:04:23.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027232 s, 38.5 MB/s 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.052 256+0 records in 00:04:23.052 256+0 records out 00:04:23.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261775 s, 40.1 MB/s 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:04:23.052 16:51:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@51 -- # local i 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.052 16:51:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:23.311 16:51:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:23.311 16:51:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:23.311 16:51:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:23.311 16:51:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.311 16:51:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.311 16:51:38 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:23.311 16:51:38 -- bdev/nbd_common.sh@41 -- # break 00:04:23.311 16:51:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.311 16:51:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.312 16:51:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:23.569 16:51:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@41 -- # break 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.570 16:51:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.827 16:51:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:23.827 16:51:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:23.827 16:51:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.827 16:51:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:23.827 16:51:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:23.827 16:51:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.827 16:51:39 -- bdev/nbd_common.sh@65 -- # true 00:04:23.828 16:51:39 -- bdev/nbd_common.sh@65 -- # count=0 00:04:23.828 16:51:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:23.828 16:51:39 -- bdev/nbd_common.sh@104 -- # count=0 00:04:23.828 16:51:39 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:23.828 16:51:39 -- bdev/nbd_common.sh@109 -- # return 0 00:04:23.828 16:51:39 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.087 16:51:39 -- event/event.sh@35 -- # sleep 3 00:04:24.346 [2024-04-18 16:51:39.998657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.606 [2024-04-18 16:51:40.118864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.606 [2024-04-18 16:51:40.118866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.606 [2024-04-18 16:51:40.181220] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:24.606 [2024-04-18 16:51:40.181293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:27.153 16:51:42 -- event/event.sh@38 -- # waitforlisten 1573211 /var/tmp/spdk-nbd.sock 00:04:27.153 16:51:42 -- common/autotest_common.sh@817 -- # '[' -z 1573211 ']' 00:04:27.153 16:51:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.153 16:51:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:27.153 16:51:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:27.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
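The waitfornbd_exit loop traced a few entries back polls /proc/partitions with `grep -q -w` for up to 20 attempts and breaks as soon as the device entry disappears. It can be sketched as below — a minimal reconstruction from the xtrace output, not SPDK's exact helper; the sleep interval and the partitions-file parameter are assumptions added to keep the sketch self-contained and testable.

```shell
# Poll a partitions table until the named nbd device disappears,
# as in the traced loop: up to 20 attempts of `grep -q -w`.
# The second argument defaults to /proc/partitions; the 0.1 s
# pause between polls is an assumed interval.
waitfornbd_exit() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" "$partitions"; then
            return 0   # device entry is gone
        fi
        sleep 0.1
    done
    return 1           # still present after 20 polls
}
```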
00:04:27.153 16:51:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:27.153 16:51:42 -- common/autotest_common.sh@10 -- # set +x 00:04:27.414 16:51:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:27.414 16:51:42 -- common/autotest_common.sh@850 -- # return 0 00:04:27.414 16:51:42 -- event/event.sh@39 -- # killprocess 1573211 00:04:27.414 16:51:42 -- common/autotest_common.sh@936 -- # '[' -z 1573211 ']' 00:04:27.414 16:51:42 -- common/autotest_common.sh@940 -- # kill -0 1573211 00:04:27.414 16:51:42 -- common/autotest_common.sh@941 -- # uname 00:04:27.414 16:51:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:27.414 16:51:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1573211 00:04:27.414 16:51:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:27.414 16:51:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:27.414 16:51:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1573211' 00:04:27.414 killing process with pid 1573211 00:04:27.414 16:51:42 -- common/autotest_common.sh@955 -- # kill 1573211 00:04:27.414 16:51:42 -- common/autotest_common.sh@960 -- # wait 1573211 00:04:27.672 spdk_app_start is called in Round 0. 00:04:27.672 Shutdown signal received, stop current app iteration 00:04:27.672 Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 reinitialization... 00:04:27.672 spdk_app_start is called in Round 1. 00:04:27.672 Shutdown signal received, stop current app iteration 00:04:27.672 Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 reinitialization... 00:04:27.672 spdk_app_start is called in Round 2. 00:04:27.672 Shutdown signal received, stop current app iteration 00:04:27.672 Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 reinitialization... 00:04:27.672 spdk_app_start is called in Round 3. 
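The nbd_get_count sequence earlier in the trace (nbd_get_disks RPC, `jq -r '.[] | .nbd_device'`, `grep -c /dev/nbd`, then the `true` that absorbs grep's non-zero status on an empty count) reduces to one small pipeline. In the sketch below the JSON is passed in as an argument so the example stands alone; in the real test it comes from rpc.py. Requires jq, as the original does.

```shell
# Count /dev/nbd* entries in the JSON that nbd_get_disks returns.
# `grep -c` exits non-zero when the count is 0, hence the `|| true`
# that shows up in the trace as `-- # true`.
nbd_get_count() {
    local nbd_disks_json=$1
    echo "$nbd_disks_json" \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true
}
```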
00:04:27.672 Shutdown signal received, stop current app iteration 00:04:27.672 16:51:43 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:27.672 16:51:43 -- event/event.sh@42 -- # return 0 00:04:27.672 00:04:27.672 real 0m17.815s 00:04:27.672 user 0m38.933s 00:04:27.672 sys 0m3.209s 00:04:27.672 16:51:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.672 16:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.672 ************************************ 00:04:27.672 END TEST app_repeat 00:04:27.672 ************************************ 00:04:27.672 16:51:43 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:27.672 16:51:43 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:27.672 16:51:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.672 16:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.672 16:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.930 ************************************ 00:04:27.930 START TEST cpu_locks 00:04:27.930 ************************************ 00:04:27.930 16:51:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:27.930 * Looking for test storage... 
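killprocess, invoked throughout this log, follows a standard pattern: probe the PID with `kill -0`, confirm the process name (the traces check `ps --no-headers -o comm=` and refuse to kill sudo), then signal and wait. A simplified sketch without the name check — the error message wording is an assumption:

```shell
# Probe, signal, and reap a test process. `kill -0` sends no
# signal; it only checks that the PID exists and is signalable.
killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is not running" >&2
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # `wait` only works for our own children; ignore errors otherwise
    wait "$pid" 2>/dev/null || true
}
```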
00:04:27.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:27.930 16:51:43 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:27.930 16:51:43 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:27.930 16:51:43 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:27.930 16:51:43 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:27.930 16:51:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.930 16:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.930 16:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.930 ************************************ 00:04:27.930 START TEST default_locks 00:04:27.930 ************************************ 00:04:27.930 16:51:43 -- common/autotest_common.sh@1111 -- # default_locks 00:04:27.930 16:51:43 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1575567 00:04:27.930 16:51:43 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.930 16:51:43 -- event/cpu_locks.sh@47 -- # waitforlisten 1575567 00:04:27.930 16:51:43 -- common/autotest_common.sh@817 -- # '[' -z 1575567 ']' 00:04:27.930 16:51:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.930 16:51:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:27.930 16:51:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.930 16:51:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:27.930 16:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.930 [2024-04-18 16:51:43.582501] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:27.930 [2024-04-18 16:51:43.582585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575567 ] 00:04:27.930 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.187 [2024-04-18 16:51:43.646467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.187 [2024-04-18 16:51:43.759750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.123 16:51:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:29.123 16:51:44 -- common/autotest_common.sh@850 -- # return 0 00:04:29.123 16:51:44 -- event/cpu_locks.sh@49 -- # locks_exist 1575567 00:04:29.123 16:51:44 -- event/cpu_locks.sh@22 -- # lslocks -p 1575567 00:04:29.123 16:51:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.382 lslocks: write error 00:04:29.382 16:51:45 -- event/cpu_locks.sh@50 -- # killprocess 1575567 00:04:29.382 16:51:45 -- common/autotest_common.sh@936 -- # '[' -z 1575567 ']' 00:04:29.382 16:51:45 -- common/autotest_common.sh@940 -- # kill -0 1575567 00:04:29.382 16:51:45 -- common/autotest_common.sh@941 -- # uname 00:04:29.382 16:51:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.382 16:51:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1575567 00:04:29.382 16:51:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:29.382 16:51:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:29.382 16:51:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1575567' 00:04:29.382 killing process with pid 1575567 00:04:29.382 16:51:45 -- common/autotest_common.sh@955 -- # kill 1575567 00:04:29.382 16:51:45 -- common/autotest_common.sh@960 -- # wait 1575567 00:04:29.951 16:51:45 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1575567 00:04:29.951 16:51:45 -- 
common/autotest_common.sh@638 -- # local es=0 00:04:29.951 16:51:45 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1575567 00:04:29.951 16:51:45 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:29.951 16:51:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:29.951 16:51:45 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:29.951 16:51:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:29.951 16:51:45 -- common/autotest_common.sh@641 -- # waitforlisten 1575567 00:04:29.951 16:51:45 -- common/autotest_common.sh@817 -- # '[' -z 1575567 ']' 00:04:29.951 16:51:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.951 16:51:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:29.951 16:51:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
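The locks_exist check above asks `lslocks -p <pid>` whether the target still holds a file lock matching spdk_cpu_lock. The mechanism behind that lock is an exclusive, non-blocking flock on a per-core lock file, which is why a second target on the same core is refused. The demo below reproduces the idea with the util-linux flock utility; the lock-file path is illustrative, not SPDK's actual location.

```shell
# Two claimants contend for one lock file, mirroring two spdk_tgt
# instances contending for a core. Path is illustrative only.
lockfile=$(mktemp /tmp/spdk_cpu_lock_demo.XXXXXX)

# First instance: take the exclusive lock and hold it in the background.
flock -n "$lockfile" -c 'sleep 5' &
holder=$!
sleep 0.3                      # give the holder time to acquire

# Second instance: a non-blocking attempt fails while the lock is held.
if flock -n "$lockfile" -c 'true'; then
    second=claimed
else
    second=refused
fi
echo "second claimant: $second"

kill "$holder" 2>/dev/null || true
wait "$holder" 2>/dev/null || true
rm -f "$lockfile"
```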
00:04:29.951 16:51:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:29.951 16:51:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1575567) - No such process 00:04:29.951 ERROR: process (pid: 1575567) is no longer running 00:04:29.951 16:51:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:29.951 16:51:45 -- common/autotest_common.sh@850 -- # return 1 00:04:29.951 16:51:45 -- common/autotest_common.sh@641 -- # es=1 00:04:29.951 16:51:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:29.951 16:51:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:29.951 16:51:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:29.951 16:51:45 -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.951 16:51:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.951 16:51:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.951 16:51:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.951 00:04:29.951 real 0m1.956s 00:04:29.951 user 0m2.075s 00:04:29.951 sys 0m0.598s 00:04:29.951 16:51:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.951 16:51:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.951 ************************************ 00:04:29.951 END TEST default_locks 00:04:29.951 ************************************ 00:04:29.951 16:51:45 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.951 16:51:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.951 16:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.951 16:51:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.951 ************************************ 00:04:29.951 START TEST default_locks_via_rpc 00:04:29.951 ************************************ 00:04:29.951 16:51:45 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:04:29.951 16:51:45 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=1575870 00:04:29.951 16:51:45 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.951 16:51:45 -- event/cpu_locks.sh@63 -- # waitforlisten 1575870 00:04:29.951 16:51:45 -- common/autotest_common.sh@817 -- # '[' -z 1575870 ']' 00:04:29.951 16:51:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.951 16:51:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:29.951 16:51:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.951 16:51:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:29.951 16:51:45 -- common/autotest_common.sh@10 -- # set +x 00:04:30.211 [2024-04-18 16:51:45.663149] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:30.211 [2024-04-18 16:51:45.663235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575870 ] 00:04:30.211 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.211 [2024-04-18 16:51:45.724544] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.211 [2024-04-18 16:51:45.838531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.148 16:51:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:31.148 16:51:46 -- common/autotest_common.sh@850 -- # return 0 00:04:31.148 16:51:46 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:31.148 16:51:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.148 16:51:46 -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 16:51:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.148 16:51:46 -- event/cpu_locks.sh@67 -- # no_locks 00:04:31.148 16:51:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:31.148 16:51:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:31.148 16:51:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:31.148 16:51:46 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:31.148 16:51:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.148 16:51:46 -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 16:51:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.148 16:51:46 -- event/cpu_locks.sh@71 -- # locks_exist 1575870 00:04:31.148 16:51:46 -- event/cpu_locks.sh@22 -- # lslocks -p 1575870 00:04:31.148 16:51:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:31.408 16:51:46 -- event/cpu_locks.sh@73 -- # killprocess 1575870 00:04:31.408 16:51:46 -- common/autotest_common.sh@936 -- # '[' -z 1575870 ']' 00:04:31.408 16:51:46 -- common/autotest_common.sh@940 -- # kill -0 
1575870 00:04:31.408 16:51:46 -- common/autotest_common.sh@941 -- # uname 00:04:31.408 16:51:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:31.408 16:51:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1575870 00:04:31.408 16:51:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:31.408 16:51:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:31.408 16:51:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1575870' 00:04:31.408 killing process with pid 1575870 00:04:31.408 16:51:46 -- common/autotest_common.sh@955 -- # kill 1575870 00:04:31.408 16:51:46 -- common/autotest_common.sh@960 -- # wait 1575870 00:04:31.976 00:04:31.976 real 0m1.783s 00:04:31.976 user 0m1.898s 00:04:31.976 sys 0m0.572s 00:04:31.976 16:51:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:31.976 16:51:47 -- common/autotest_common.sh@10 -- # set +x 00:04:31.976 ************************************ 00:04:31.976 END TEST default_locks_via_rpc 00:04:31.976 ************************************ 00:04:31.976 16:51:47 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:31.976 16:51:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.976 16:51:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.976 16:51:47 -- common/autotest_common.sh@10 -- # set +x 00:04:31.976 ************************************ 00:04:31.976 START TEST non_locking_app_on_locked_coremask 00:04:31.976 ************************************ 00:04:31.976 16:51:47 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:04:31.976 16:51:47 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1576172 00:04:31.976 16:51:47 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.976 16:51:47 -- event/cpu_locks.sh@81 -- # waitforlisten 1576172 /var/tmp/spdk.sock 
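The no_locks assertion that follows each kill in these traces (`lock_files=()` then `(( 0 != 0 ))`) verifies that no stale core-lock files survive the target. A standalone sketch; the directory argument and the `spdk_cpu_lock*` glob are assumptions for illustration:

```shell
# Succeed only when no spdk_cpu_lock* files are left behind.
no_locks() {
    local dir=${1:-/var/tmp}
    local lock_files=()
    local f
    for f in "$dir"/spdk_cpu_lock*; do
        # An unmatched glob stays literal, so test existence explicitly.
        if [ -e "$f" ]; then
            lock_files+=("$f")
        fi
    done
    (( ${#lock_files[@]} == 0 ))
}
```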
00:04:31.976 16:51:47 -- common/autotest_common.sh@817 -- # '[' -z 1576172 ']' 00:04:31.976 16:51:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.976 16:51:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:31.976 16:51:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.976 16:51:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:31.976 16:51:47 -- common/autotest_common.sh@10 -- # set +x 00:04:31.976 [2024-04-18 16:51:47.550921] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:31.976 [2024-04-18 16:51:47.551004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576172 ] 00:04:31.976 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.976 [2024-04-18 16:51:47.612176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.236 [2024-04-18 16:51:47.726232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.803 16:51:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:32.803 16:51:48 -- common/autotest_common.sh@850 -- # return 0 00:04:32.803 16:51:48 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1576292 00:04:32.803 16:51:48 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:32.803 16:51:48 -- event/cpu_locks.sh@85 -- # waitforlisten 1576292 /var/tmp/spdk2.sock 00:04:32.803 16:51:48 -- common/autotest_common.sh@817 -- # '[' -z 1576292 ']' 00:04:32.803 16:51:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:32.803 16:51:48 
-- common/autotest_common.sh@822 -- # local max_retries=100 00:04:32.803 16:51:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:32.803 16:51:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:32.803 16:51:48 -- common/autotest_common.sh@10 -- # set +x 00:04:33.063 [2024-04-18 16:51:48.519959] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:33.063 [2024-04-18 16:51:48.520034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576292 ] 00:04:33.063 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.063 [2024-04-18 16:51:48.601610] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:33.063 [2024-04-18 16:51:48.601649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.322 [2024-04-18 16:51:48.830933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.889 16:51:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:33.889 16:51:49 -- common/autotest_common.sh@850 -- # return 0 00:04:33.889 16:51:49 -- event/cpu_locks.sh@87 -- # locks_exist 1576172 00:04:33.889 16:51:49 -- event/cpu_locks.sh@22 -- # lslocks -p 1576172 00:04:33.889 16:51:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.456 lslocks: write error 00:04:34.456 16:51:49 -- event/cpu_locks.sh@89 -- # killprocess 1576172 00:04:34.456 16:51:49 -- common/autotest_common.sh@936 -- # '[' -z 1576172 ']' 00:04:34.456 16:51:49 -- common/autotest_common.sh@940 -- # kill -0 1576172 00:04:34.456 16:51:49 -- common/autotest_common.sh@941 -- # uname 00:04:34.456 16:51:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:34.456 16:51:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1576172 00:04:34.456 16:51:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:34.456 16:51:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:34.456 16:51:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1576172' 00:04:34.456 killing process with pid 1576172 00:04:34.456 16:51:49 -- common/autotest_common.sh@955 -- # kill 1576172 00:04:34.456 16:51:49 -- common/autotest_common.sh@960 -- # wait 1576172 00:04:35.393 16:51:50 -- event/cpu_locks.sh@90 -- # killprocess 1576292 00:04:35.393 16:51:50 -- common/autotest_common.sh@936 -- # '[' -z 1576292 ']' 00:04:35.393 16:51:50 -- common/autotest_common.sh@940 -- # kill -0 1576292 00:04:35.393 16:51:50 -- common/autotest_common.sh@941 -- # uname 00:04:35.393 16:51:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:35.393 16:51:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1576292 
00:04:35.393 16:51:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:35.393 16:51:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:35.393 16:51:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1576292' 00:04:35.393 killing process with pid 1576292 00:04:35.393 16:51:50 -- common/autotest_common.sh@955 -- # kill 1576292 00:04:35.393 16:51:50 -- common/autotest_common.sh@960 -- # wait 1576292 00:04:35.653 00:04:35.653 real 0m3.823s 00:04:35.653 user 0m4.153s 00:04:35.653 sys 0m1.055s 00:04:35.653 16:51:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:35.653 16:51:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.653 ************************************ 00:04:35.653 END TEST non_locking_app_on_locked_coremask 00:04:35.653 ************************************ 00:04:35.653 16:51:51 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:35.653 16:51:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.653 16:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.653 16:51:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.912 ************************************ 00:04:35.912 START TEST locking_app_on_unlocked_coremask 00:04:35.912 ************************************ 00:04:35.912 16:51:51 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:04:35.912 16:51:51 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1576620 00:04:35.912 16:51:51 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:35.912 16:51:51 -- event/cpu_locks.sh@99 -- # waitforlisten 1576620 /var/tmp/spdk.sock 00:04:35.912 16:51:51 -- common/autotest_common.sh@817 -- # '[' -z 1576620 ']' 00:04:35.912 16:51:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.912 16:51:51 -- common/autotest_common.sh@822 -- 
# local max_retries=100 00:04:35.912 16:51:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.912 16:51:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:35.912 16:51:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.912 [2024-04-18 16:51:51.498777] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:35.912 [2024-04-18 16:51:51.498869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576620 ] 00:04:35.912 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.912 [2024-04-18 16:51:51.556886] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:35.912 [2024-04-18 16:51:51.556933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.171 [2024-04-18 16:51:51.664122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.429 16:51:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:36.429 16:51:51 -- common/autotest_common.sh@850 -- # return 0 00:04:36.429 16:51:51 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1576753 00:04:36.429 16:51:51 -- event/cpu_locks.sh@103 -- # waitforlisten 1576753 /var/tmp/spdk2.sock 00:04:36.429 16:51:51 -- common/autotest_common.sh@817 -- # '[' -z 1576753 ']' 00:04:36.429 16:51:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:36.429 16:51:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.429 16:51:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:36.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:36.429 16:51:51 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:36.429 16:51:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.429 16:51:51 -- common/autotest_common.sh@10 -- # set +x 00:04:36.429 [2024-04-18 16:51:51.979998] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:36.429 [2024-04-18 16:51:51.980082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576753 ] 00:04:36.429 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.429 [2024-04-18 16:51:52.077423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.689 [2024-04-18 16:51:52.308899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.254 16:51:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.254 16:51:52 -- common/autotest_common.sh@850 -- # return 0 00:04:37.254 16:51:52 -- event/cpu_locks.sh@105 -- # locks_exist 1576753 00:04:37.254 16:51:52 -- event/cpu_locks.sh@22 -- # lslocks -p 1576753 00:04:37.254 16:51:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.820 lslocks: write error 00:04:37.820 16:51:53 -- event/cpu_locks.sh@107 -- # killprocess 1576620 00:04:37.820 16:51:53 -- common/autotest_common.sh@936 -- # '[' -z 1576620 ']' 00:04:37.820 16:51:53 -- common/autotest_common.sh@940 -- # kill -0 1576620 00:04:37.820 16:51:53 -- common/autotest_common.sh@941 -- # uname 00:04:37.820 16:51:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:37.820 16:51:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1576620 00:04:37.820 16:51:53 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:04:37.820 16:51:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:37.820 16:51:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1576620' 00:04:37.820 killing process with pid 1576620 00:04:37.820 16:51:53 -- common/autotest_common.sh@955 -- # kill 1576620 00:04:37.820 16:51:53 -- common/autotest_common.sh@960 -- # wait 1576620 00:04:38.756 16:51:54 -- event/cpu_locks.sh@108 -- # killprocess 1576753 00:04:38.756 16:51:54 -- common/autotest_common.sh@936 -- # '[' -z 1576753 ']' 00:04:38.756 16:51:54 -- common/autotest_common.sh@940 -- # kill -0 1576753 00:04:38.756 16:51:54 -- common/autotest_common.sh@941 -- # uname 00:04:38.756 16:51:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:38.756 16:51:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1576753 00:04:38.756 16:51:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:38.756 16:51:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:38.756 16:51:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1576753' 00:04:38.756 killing process with pid 1576753 00:04:38.756 16:51:54 -- common/autotest_common.sh@955 -- # kill 1576753 00:04:38.756 16:51:54 -- common/autotest_common.sh@960 -- # wait 1576753 00:04:39.323 00:04:39.323 real 0m3.308s 00:04:39.323 user 0m3.414s 00:04:39.323 sys 0m1.035s 00:04:39.323 16:51:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:39.323 16:51:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.323 ************************************ 00:04:39.323 END TEST locking_app_on_unlocked_coremask 00:04:39.323 ************************************ 00:04:39.323 16:51:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:39.323 16:51:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.323 16:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 
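waitforlisten, whose "Waiting for process to start up and listen..." lines punctuate this log, polls for the target's UNIX-domain RPC socket while confirming the process is still alive. A simplified sketch: the real helper also issues an RPC to confirm readiness, which is omitted here, and the retry budget follows the `max_retries=100` seen in the trace while the 0.1 s interval is an assumption.

```shell
# Wait until a UNIX socket appears at rpc_addr, or give up when the
# process dies or the retries run out.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        if [ -S "$rpc_addr" ]; then
            return 0            # socket is there; target is listening
        fi
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1            # process died before listening
        fi
        sleep 0.1
    done
    return 1
}
```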
00:04:39.323 16:51:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.323 ************************************ 00:04:39.323 START TEST locking_app_on_locked_coremask 00:04:39.323 ************************************ 00:04:39.323 16:51:54 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:04:39.323 16:51:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1577066 00:04:39.323 16:51:54 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.323 16:51:54 -- event/cpu_locks.sh@116 -- # waitforlisten 1577066 /var/tmp/spdk.sock 00:04:39.323 16:51:54 -- common/autotest_common.sh@817 -- # '[' -z 1577066 ']' 00:04:39.323 16:51:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.323 16:51:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:39.323 16:51:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.323 16:51:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:39.323 16:51:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.323 [2024-04-18 16:51:54.934255] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:39.323 [2024-04-18 16:51:54.934334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577066 ] 00:04:39.323 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.323 [2024-04-18 16:51:54.996239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.580 [2024-04-18 16:51:55.110494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.184 16:51:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:40.184 16:51:55 -- common/autotest_common.sh@850 -- # return 0 00:04:40.184 16:51:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1577203 00:04:40.184 16:51:55 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.184 16:51:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1577203 /var/tmp/spdk2.sock 00:04:40.184 16:51:55 -- common/autotest_common.sh@638 -- # local es=0 00:04:40.184 16:51:55 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1577203 /var/tmp/spdk2.sock 00:04:40.184 16:51:55 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:40.184 16:51:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:40.184 16:51:55 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:40.184 16:51:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:40.184 16:51:55 -- common/autotest_common.sh@641 -- # waitforlisten 1577203 /var/tmp/spdk2.sock 00:04:40.184 16:51:55 -- common/autotest_common.sh@817 -- # '[' -z 1577203 ']' 00:04:40.184 16:51:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.184 16:51:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:40.184 16:51:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.184 16:51:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:40.184 16:51:55 -- common/autotest_common.sh@10 -- # set +x 00:04:40.442 [2024-04-18 16:51:55.917085] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:40.442 [2024-04-18 16:51:55.917165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577203 ] 00:04:40.442 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.442 [2024-04-18 16:51:56.012815] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1577066 has claimed it. 00:04:40.442 [2024-04-18 16:51:56.012873] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:04:41.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1577203) - No such process 00:04:41.009 ERROR: process (pid: 1577203) is no longer running 00:04:41.009 16:51:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.009 16:51:56 -- common/autotest_common.sh@850 -- # return 1 00:04:41.009 16:51:56 -- common/autotest_common.sh@641 -- # es=1 00:04:41.009 16:51:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:41.009 16:51:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:41.009 16:51:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:41.009 16:51:56 -- event/cpu_locks.sh@122 -- # locks_exist 1577066 00:04:41.009 16:51:56 -- event/cpu_locks.sh@22 -- # lslocks -p 1577066 00:04:41.009 16:51:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.576 lslocks: write error 00:04:41.576 16:51:57 -- event/cpu_locks.sh@124 -- # killprocess 1577066 00:04:41.576 16:51:57 -- common/autotest_common.sh@936 -- # '[' -z 1577066 ']' 00:04:41.576 16:51:57 -- common/autotest_common.sh@940 -- # kill -0 1577066 00:04:41.576 16:51:57 -- common/autotest_common.sh@941 -- # uname 00:04:41.576 16:51:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.576 16:51:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1577066 00:04:41.576 16:51:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.576 16:51:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.576 16:51:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1577066' 00:04:41.576 killing process with pid 1577066 00:04:41.576 16:51:57 -- common/autotest_common.sh@955 -- # kill 1577066 00:04:41.576 16:51:57 -- common/autotest_common.sh@960 -- # wait 1577066 00:04:41.833 00:04:41.833 real 0m2.650s 00:04:41.833 user 0m2.982s 00:04:41.833 sys 0m0.712s 00:04:41.833 16:51:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.833 
16:51:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.833 ************************************ 00:04:41.833 END TEST locking_app_on_locked_coremask 00:04:41.833 ************************************ 00:04:42.092 16:51:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:42.092 16:51:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.092 16:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.092 16:51:57 -- common/autotest_common.sh@10 -- # set +x 00:04:42.092 ************************************ 00:04:42.092 START TEST locking_overlapped_coremask 00:04:42.092 ************************************ 00:04:42.092 16:51:57 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:04:42.092 16:51:57 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1577499 00:04:42.092 16:51:57 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:42.092 16:51:57 -- event/cpu_locks.sh@133 -- # waitforlisten 1577499 /var/tmp/spdk.sock 00:04:42.092 16:51:57 -- common/autotest_common.sh@817 -- # '[' -z 1577499 ']' 00:04:42.092 16:51:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.092 16:51:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.092 16:51:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.092 16:51:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.092 16:51:57 -- common/autotest_common.sh@10 -- # set +x 00:04:42.092 [2024-04-18 16:51:57.701824] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:42.092 [2024-04-18 16:51:57.701923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577499 ] 00:04:42.092 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.092 [2024-04-18 16:51:57.763517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:42.352 [2024-04-18 16:51:57.884842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.352 [2024-04-18 16:51:57.884909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.352 [2024-04-18 16:51:57.884912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.610 16:51:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:42.610 16:51:58 -- common/autotest_common.sh@850 -- # return 0 00:04:42.610 16:51:58 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1577510 00:04:42.610 16:51:58 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1577510 /var/tmp/spdk2.sock 00:04:42.610 16:51:58 -- common/autotest_common.sh@638 -- # local es=0 00:04:42.610 16:51:58 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1577510 /var/tmp/spdk2.sock 00:04:42.610 16:51:58 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:42.610 16:51:58 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:42.610 16:51:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:42.611 16:51:58 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:42.611 16:51:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:42.611 16:51:58 -- common/autotest_common.sh@641 -- # waitforlisten 1577510 /var/tmp/spdk2.sock 00:04:42.611 16:51:58 -- common/autotest_common.sh@817 -- # '[' -z 1577510 ']' 00:04:42.611 16:51:58 -- common/autotest_common.sh@821 -- # 
local rpc_addr=/var/tmp/spdk2.sock 00:04:42.611 16:51:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.611 16:51:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:42.611 16:51:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.611 16:51:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.611 [2024-04-18 16:51:58.193634] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:42.611 [2024-04-18 16:51:58.193725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577510 ] 00:04:42.611 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.611 [2024-04-18 16:51:58.279871] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1577499 has claimed it. 00:04:42.611 [2024-04-18 16:51:58.279930] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:04:43.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1577510) - No such process 00:04:43.178 ERROR: process (pid: 1577510) is no longer running 00:04:43.178 16:51:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:43.178 16:51:58 -- common/autotest_common.sh@850 -- # return 1 00:04:43.178 16:51:58 -- common/autotest_common.sh@641 -- # es=1 00:04:43.178 16:51:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:43.178 16:51:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:43.178 16:51:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:43.178 16:51:58 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:43.178 16:51:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:43.178 16:51:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:43.178 16:51:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:43.178 16:51:58 -- event/cpu_locks.sh@141 -- # killprocess 1577499 00:04:43.178 16:51:58 -- common/autotest_common.sh@936 -- # '[' -z 1577499 ']' 00:04:43.178 16:51:58 -- common/autotest_common.sh@940 -- # kill -0 1577499 00:04:43.178 16:51:58 -- common/autotest_common.sh@941 -- # uname 00:04:43.178 16:51:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:43.438 16:51:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1577499 00:04:43.438 16:51:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:43.438 16:51:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:43.438 16:51:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1577499' 00:04:43.438 killing process with pid 1577499 00:04:43.438 
16:51:58 -- common/autotest_common.sh@955 -- # kill 1577499 00:04:43.438 16:51:58 -- common/autotest_common.sh@960 -- # wait 1577499 00:04:43.697 00:04:43.697 real 0m1.720s 00:04:43.697 user 0m4.550s 00:04:43.697 sys 0m0.453s 00:04:43.697 16:51:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.697 16:51:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.697 ************************************ 00:04:43.697 END TEST locking_overlapped_coremask 00:04:43.697 ************************************ 00:04:43.697 16:51:59 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:43.697 16:51:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.697 16:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.697 16:51:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.958 ************************************ 00:04:43.958 START TEST locking_overlapped_coremask_via_rpc 00:04:43.958 ************************************ 00:04:43.958 16:51:59 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:04:43.958 16:51:59 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1577684 00:04:43.958 16:51:59 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:43.958 16:51:59 -- event/cpu_locks.sh@149 -- # waitforlisten 1577684 /var/tmp/spdk.sock 00:04:43.958 16:51:59 -- common/autotest_common.sh@817 -- # '[' -z 1577684 ']' 00:04:43.958 16:51:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.958 16:51:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:43.958 16:51:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.958 16:51:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:43.958 16:51:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.958 [2024-04-18 16:51:59.547795] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:43.958 [2024-04-18 16:51:59.547896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577684 ] 00:04:43.958 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.958 [2024-04-18 16:51:59.605432] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:43.958 [2024-04-18 16:51:59.605469] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:44.218 [2024-04-18 16:51:59.716029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.218 [2024-04-18 16:51:59.717406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.218 [2024-04-18 16:51:59.717419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.476 16:51:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:44.476 16:51:59 -- common/autotest_common.sh@850 -- # return 0 00:04:44.476 16:51:59 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1577818 00:04:44.477 16:51:59 -- event/cpu_locks.sh@153 -- # waitforlisten 1577818 /var/tmp/spdk2.sock 00:04:44.477 16:51:59 -- common/autotest_common.sh@817 -- # '[' -z 1577818 ']' 00:04:44.477 16:51:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.477 16:51:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.477 16:51:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:44.477 16:51:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.477 16:51:59 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:44.477 16:51:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.477 [2024-04-18 16:52:00.016370] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:44.477 [2024-04-18 16:52:00.016470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577818 ] 00:04:44.477 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.477 [2024-04-18 16:52:00.105692] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:44.477 [2024-04-18 16:52:00.105733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:44.735 [2024-04-18 16:52:00.324308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.735 [2024-04-18 16:52:00.331421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:44.735 [2024-04-18 16:52:00.331424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.301 16:52:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:45.301 16:52:00 -- common/autotest_common.sh@850 -- # return 0 00:04:45.301 16:52:00 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:45.301 16:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:45.301 16:52:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.301 16:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:45.301 16:52:00 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:45.301 16:52:00 -- common/autotest_common.sh@638 -- # local es=0 00:04:45.301 16:52:00 -- common/autotest_common.sh@640 
-- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:45.301 16:52:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:45.301 16:52:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:45.301 16:52:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:45.301 16:52:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:45.301 16:52:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:45.301 16:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:45.301 16:52:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.301 [2024-04-18 16:52:00.981475] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1577684 has claimed it. 00:04:45.301 request: 00:04:45.301 { 00:04:45.301 "method": "framework_enable_cpumask_locks", 00:04:45.301 "req_id": 1 00:04:45.301 } 00:04:45.301 Got JSON-RPC error response 00:04:45.301 response: 00:04:45.301 { 00:04:45.301 "code": -32603, 00:04:45.301 "message": "Failed to claim CPU core: 2" 00:04:45.301 } 00:04:45.301 16:52:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:45.301 16:52:00 -- common/autotest_common.sh@641 -- # es=1 00:04:45.301 16:52:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:45.301 16:52:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:45.301 16:52:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:45.301 16:52:00 -- event/cpu_locks.sh@158 -- # waitforlisten 1577684 /var/tmp/spdk.sock 00:04:45.301 16:52:00 -- common/autotest_common.sh@817 -- # '[' -z 1577684 ']' 00:04:45.301 16:52:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.301 16:52:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:45.301 16:52:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:45.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.301 16:52:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:45.301 16:52:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.560 16:52:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:45.560 16:52:01 -- common/autotest_common.sh@850 -- # return 0 00:04:45.560 16:52:01 -- event/cpu_locks.sh@159 -- # waitforlisten 1577818 /var/tmp/spdk2.sock 00:04:45.560 16:52:01 -- common/autotest_common.sh@817 -- # '[' -z 1577818 ']' 00:04:45.560 16:52:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.560 16:52:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:45.560 16:52:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.560 16:52:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:45.560 16:52:01 -- common/autotest_common.sh@10 -- # set +x 00:04:45.818 16:52:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:45.818 16:52:01 -- common/autotest_common.sh@850 -- # return 0 00:04:45.818 16:52:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:45.818 16:52:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:45.818 16:52:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:45.818 16:52:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:45.818 00:04:45.818 real 0m1.967s 00:04:45.818 user 0m1.008s 00:04:45.818 sys 0m0.181s 00:04:45.818 16:52:01 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:04:45.818 16:52:01 -- common/autotest_common.sh@10 -- # set +x 00:04:45.818 ************************************ 00:04:45.818 END TEST locking_overlapped_coremask_via_rpc 00:04:45.818 ************************************ 00:04:45.818 16:52:01 -- event/cpu_locks.sh@174 -- # cleanup 00:04:45.818 16:52:01 -- event/cpu_locks.sh@15 -- # [[ -z 1577684 ]] 00:04:45.818 16:52:01 -- event/cpu_locks.sh@15 -- # killprocess 1577684 00:04:45.818 16:52:01 -- common/autotest_common.sh@936 -- # '[' -z 1577684 ']' 00:04:45.818 16:52:01 -- common/autotest_common.sh@940 -- # kill -0 1577684 00:04:45.819 16:52:01 -- common/autotest_common.sh@941 -- # uname 00:04:45.819 16:52:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.819 16:52:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1577684 00:04:45.819 16:52:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.819 16:52:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.819 16:52:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1577684' 00:04:45.819 killing process with pid 1577684 00:04:45.819 16:52:01 -- common/autotest_common.sh@955 -- # kill 1577684 00:04:45.819 16:52:01 -- common/autotest_common.sh@960 -- # wait 1577684 00:04:46.385 16:52:01 -- event/cpu_locks.sh@16 -- # [[ -z 1577818 ]] 00:04:46.385 16:52:01 -- event/cpu_locks.sh@16 -- # killprocess 1577818 00:04:46.385 16:52:01 -- common/autotest_common.sh@936 -- # '[' -z 1577818 ']' 00:04:46.385 16:52:01 -- common/autotest_common.sh@940 -- # kill -0 1577818 00:04:46.385 16:52:01 -- common/autotest_common.sh@941 -- # uname 00:04:46.385 16:52:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:46.385 16:52:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1577818 00:04:46.385 16:52:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:46.385 16:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo 
']' 00:04:46.385 16:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1577818' 00:04:46.385 killing process with pid 1577818 00:04:46.385 16:52:02 -- common/autotest_common.sh@955 -- # kill 1577818 00:04:46.385 16:52:02 -- common/autotest_common.sh@960 -- # wait 1577818 00:04:46.952 16:52:02 -- event/cpu_locks.sh@18 -- # rm -f 00:04:46.952 16:52:02 -- event/cpu_locks.sh@1 -- # cleanup 00:04:46.952 16:52:02 -- event/cpu_locks.sh@15 -- # [[ -z 1577684 ]] 00:04:46.952 16:52:02 -- event/cpu_locks.sh@15 -- # killprocess 1577684 00:04:46.952 16:52:02 -- common/autotest_common.sh@936 -- # '[' -z 1577684 ']' 00:04:46.952 16:52:02 -- common/autotest_common.sh@940 -- # kill -0 1577684 00:04:46.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1577684) - No such process 00:04:46.953 16:52:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1577684 is not found' 00:04:46.953 Process with pid 1577684 is not found 00:04:46.953 16:52:02 -- event/cpu_locks.sh@16 -- # [[ -z 1577818 ]] 00:04:46.953 16:52:02 -- event/cpu_locks.sh@16 -- # killprocess 1577818 00:04:46.953 16:52:02 -- common/autotest_common.sh@936 -- # '[' -z 1577818 ']' 00:04:46.953 16:52:02 -- common/autotest_common.sh@940 -- # kill -0 1577818 00:04:46.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1577818) - No such process 00:04:46.953 16:52:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1577818 is not found' 00:04:46.953 Process with pid 1577818 is not found 00:04:46.953 16:52:02 -- event/cpu_locks.sh@18 -- # rm -f 00:04:46.953 00:04:46.953 real 0m19.060s 00:04:46.953 user 0m31.166s 00:04:46.953 sys 0m5.728s 00:04:46.953 16:52:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.953 16:52:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 ************************************ 00:04:46.953 END TEST cpu_locks 00:04:46.953 
************************************ 00:04:46.953 00:04:46.953 real 0m45.521s 00:04:46.953 user 1m22.908s 00:04:46.953 sys 0m10.058s 00:04:46.953 16:52:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.953 16:52:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 ************************************ 00:04:46.953 END TEST event 00:04:46.953 ************************************ 00:04:46.953 16:52:02 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.953 16:52:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.953 16:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.953 16:52:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 ************************************ 00:04:46.953 START TEST thread 00:04:46.953 ************************************ 00:04:46.953 16:52:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.953 * Looking for test storage... 00:04:46.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:46.953 16:52:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.953 16:52:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:46.953 16:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.953 16:52:02 -- common/autotest_common.sh@10 -- # set +x 00:04:47.211 ************************************ 00:04:47.211 START TEST thread_poller_perf 00:04:47.211 ************************************ 00:04:47.211 16:52:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:47.211 [2024-04-18 16:52:02.748824] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:47.211 [2024-04-18 16:52:02.748891] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578195 ] 00:04:47.211 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.211 [2024-04-18 16:52:02.811668] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.469 [2024-04-18 16:52:02.928220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.469 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:48.405 ====================================== 00:04:48.405 busy:2711962469 (cyc) 00:04:48.405 total_run_count: 294000 00:04:48.405 tsc_hz: 2700000000 (cyc) 00:04:48.405 ====================================== 00:04:48.405 poller_cost: 9224 (cyc), 3416 (nsec) 00:04:48.405 00:04:48.405 real 0m1.325s 00:04:48.405 user 0m1.237s 00:04:48.405 sys 0m0.082s 00:04:48.405 16:52:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:48.405 16:52:04 -- common/autotest_common.sh@10 -- # set +x 00:04:48.405 ************************************ 00:04:48.405 END TEST thread_poller_perf 00:04:48.405 ************************************ 00:04:48.405 16:52:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:48.405 16:52:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:48.405 16:52:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.405 16:52:04 -- common/autotest_common.sh@10 -- # set +x 00:04:48.662 ************************************ 00:04:48.662 START TEST thread_poller_perf 00:04:48.662 ************************************ 00:04:48.662 16:52:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:48.663 
[2024-04-18 16:52:04.196481] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:48.663 [2024-04-18 16:52:04.196541] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578364 ] 00:04:48.663 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.663 [2024-04-18 16:52:04.261194] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.923 [2024-04-18 16:52:04.377312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.923 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:49.860 ====================================== 00:04:49.860 busy:2702911695 (cyc) 00:04:49.860 total_run_count: 3908000 00:04:49.860 tsc_hz: 2700000000 (cyc) 00:04:49.860 ====================================== 00:04:49.860 poller_cost: 691 (cyc), 255 (nsec) 00:04:49.860 00:04:49.860 real 0m1.314s 00:04:49.860 user 0m1.229s 00:04:49.860 sys 0m0.079s 00:04:49.860 16:52:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.860 16:52:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.860 ************************************ 00:04:49.860 END TEST thread_poller_perf 00:04:49.860 ************************************ 00:04:49.860 16:52:05 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:49.860 00:04:49.860 real 0m2.928s 00:04:49.860 user 0m2.581s 00:04:49.860 sys 0m0.324s 00:04:49.860 16:52:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.860 16:52:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.860 ************************************ 00:04:49.860 END TEST thread 00:04:49.860 ************************************ 00:04:49.860 16:52:05 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:49.860 16:52:05 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:04:49.860 16:52:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.860 16:52:05 -- common/autotest_common.sh@10 -- # set +x 00:04:50.118 ************************************ 00:04:50.118 START TEST accel 00:04:50.118 ************************************ 00:04:50.118 16:52:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:50.118 * Looking for test storage... 00:04:50.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:50.118 16:52:05 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:50.118 16:52:05 -- accel/accel.sh@82 -- # get_expected_opcs 00:04:50.118 16:52:05 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:50.119 16:52:05 -- accel/accel.sh@62 -- # spdk_tgt_pid=1578684 00:04:50.119 16:52:05 -- accel/accel.sh@63 -- # waitforlisten 1578684 00:04:50.119 16:52:05 -- common/autotest_common.sh@817 -- # '[' -z 1578684 ']' 00:04:50.119 16:52:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.119 16:52:05 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:50.119 16:52:05 -- accel/accel.sh@61 -- # build_accel_config 00:04:50.119 16:52:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:50.119 16:52:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.119 16:52:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:50.119 16:52:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.119 16:52:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:50.119 16:52:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.119 16:52:05 -- common/autotest_common.sh@10 -- # set +x 00:04:50.119 16:52:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.119 16:52:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.119 16:52:05 -- accel/accel.sh@40 -- # local IFS=, 00:04:50.119 16:52:05 -- accel/accel.sh@41 -- # jq -r . 00:04:50.119 [2024-04-18 16:52:05.741033] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:50.119 [2024-04-18 16:52:05.741123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578684 ] 00:04:50.119 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.119 [2024-04-18 16:52:05.797348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.378 [2024-04-18 16:52:05.903113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.637 16:52:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:50.637 16:52:06 -- common/autotest_common.sh@850 -- # return 0 00:04:50.637 16:52:06 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:50.637 16:52:06 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:50.637 16:52:06 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:50.637 16:52:06 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:50.637 16:52:06 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:50.637 16:52:06 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:50.637 16:52:06 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:50.637 16:52:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.637 16:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:50.637 16:52:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for 
opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 
00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # IFS== 00:04:50.637 16:52:06 -- accel/accel.sh@72 -- # read -r opc module 00:04:50.637 16:52:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:50.637 16:52:06 -- accel/accel.sh@75 -- # killprocess 1578684 00:04:50.637 16:52:06 -- common/autotest_common.sh@936 -- # '[' -z 1578684 ']' 00:04:50.637 16:52:06 -- common/autotest_common.sh@940 -- # kill -0 1578684 00:04:50.637 16:52:06 -- common/autotest_common.sh@941 -- # uname 00:04:50.637 16:52:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:50.637 16:52:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1578684 00:04:50.637 16:52:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:50.637 16:52:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:50.637 16:52:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1578684' 00:04:50.637 killing process with pid 1578684 00:04:50.637 16:52:06 -- common/autotest_common.sh@955 -- # kill 1578684 00:04:50.637 16:52:06 -- common/autotest_common.sh@960 -- # wait 1578684 00:04:51.205 16:52:06 -- accel/accel.sh@76 -- # trap - ERR 00:04:51.205 16:52:06 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:51.205 16:52:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:04:51.205 16:52:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.205 16:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.205 16:52:06 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:04:51.205 16:52:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:51.205 16:52:06 -- accel/accel.sh@12 -- # build_accel_config 00:04:51.206 16:52:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.206 16:52:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.206 16:52:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.206 16:52:06 -- accel/accel.sh@34 
-- # [[ 0 -gt 0 ]] 00:04:51.206 16:52:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.206 16:52:06 -- accel/accel.sh@40 -- # local IFS=, 00:04:51.206 16:52:06 -- accel/accel.sh@41 -- # jq -r . 00:04:51.206 16:52:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.206 16:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.206 16:52:06 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:51.206 16:52:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:51.206 16:52:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.206 16:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.465 ************************************ 00:04:51.465 START TEST accel_missing_filename 00:04:51.465 ************************************ 00:04:51.465 16:52:06 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:04:51.465 16:52:06 -- common/autotest_common.sh@638 -- # local es=0 00:04:51.465 16:52:06 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:51.465 16:52:06 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:51.465 16:52:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:51.465 16:52:06 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:51.465 16:52:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:51.465 16:52:06 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:04:51.465 16:52:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:51.465 16:52:06 -- accel/accel.sh@12 -- # build_accel_config 00:04:51.465 16:52:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.465 16:52:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.465 16:52:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.465 16:52:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.465 
16:52:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.465 16:52:06 -- accel/accel.sh@40 -- # local IFS=, 00:04:51.465 16:52:06 -- accel/accel.sh@41 -- # jq -r . 00:04:51.465 [2024-04-18 16:52:06.966090] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:51.465 [2024-04-18 16:52:06.966152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578873 ] 00:04:51.465 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.465 [2024-04-18 16:52:07.028415] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.465 [2024-04-18 16:52:07.144716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.724 [2024-04-18 16:52:07.206483] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.724 [2024-04-18 16:52:07.295215] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:04:51.724 A filename is required. 
00:04:51.724 16:52:07 -- common/autotest_common.sh@641 -- # es=234 00:04:51.724 16:52:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:51.724 16:52:07 -- common/autotest_common.sh@650 -- # es=106 00:04:51.724 16:52:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:51.724 16:52:07 -- common/autotest_common.sh@658 -- # es=1 00:04:51.724 16:52:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:51.724 00:04:51.724 real 0m0.473s 00:04:51.724 user 0m0.362s 00:04:51.724 sys 0m0.145s 00:04:51.724 16:52:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.724 16:52:07 -- common/autotest_common.sh@10 -- # set +x 00:04:51.724 ************************************ 00:04:51.724 END TEST accel_missing_filename 00:04:51.724 ************************************ 00:04:51.982 16:52:07 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:51.982 16:52:07 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:51.982 16:52:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.982 16:52:07 -- common/autotest_common.sh@10 -- # set +x 00:04:51.982 ************************************ 00:04:51.982 START TEST accel_compress_verify 00:04:51.982 ************************************ 00:04:51.982 16:52:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:51.982 16:52:07 -- common/autotest_common.sh@638 -- # local es=0 00:04:51.982 16:52:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:51.982 16:52:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:51.983 16:52:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:51.983 16:52:07 -- common/autotest_common.sh@630 -- # type -t 
accel_perf 00:04:51.983 16:52:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:51.983 16:52:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:51.983 16:52:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:51.983 16:52:07 -- accel/accel.sh@12 -- # build_accel_config 00:04:51.983 16:52:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.983 16:52:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.983 16:52:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.983 16:52:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.983 16:52:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.983 16:52:07 -- accel/accel.sh@40 -- # local IFS=, 00:04:51.983 16:52:07 -- accel/accel.sh@41 -- # jq -r . 00:04:51.983 [2024-04-18 16:52:07.556587] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:51.983 [2024-04-18 16:52:07.556648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578902 ] 00:04:51.983 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.983 [2024-04-18 16:52:07.618233] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.242 [2024-04-18 16:52:07.733708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.242 [2024-04-18 16:52:07.795861] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.242 [2024-04-18 16:52:07.870014] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:04:52.501 00:04:52.501 Compression does not support the verify option, aborting. 
00:04:52.501 16:52:07 -- common/autotest_common.sh@641 -- # es=161 00:04:52.501 16:52:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:52.501 16:52:07 -- common/autotest_common.sh@650 -- # es=33 00:04:52.501 16:52:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:52.501 16:52:07 -- common/autotest_common.sh@658 -- # es=1 00:04:52.501 16:52:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:52.501 00:04:52.501 real 0m0.457s 00:04:52.501 user 0m0.346s 00:04:52.501 sys 0m0.146s 00:04:52.501 16:52:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.501 16:52:07 -- common/autotest_common.sh@10 -- # set +x 00:04:52.501 ************************************ 00:04:52.501 END TEST accel_compress_verify 00:04:52.501 ************************************ 00:04:52.501 16:52:08 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:52.501 16:52:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:52.501 16:52:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.501 16:52:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.501 ************************************ 00:04:52.501 START TEST accel_wrong_workload 00:04:52.501 ************************************ 00:04:52.501 16:52:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:04:52.501 16:52:08 -- common/autotest_common.sh@638 -- # local es=0 00:04:52.501 16:52:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:52.501 16:52:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:52.501 16:52:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:52.501 16:52:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:52.501 16:52:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:52.501 16:52:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:04:52.501 16:52:08 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:52.501 16:52:08 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.501 16:52:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.501 16:52:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.501 16:52:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.501 16:52:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.501 16:52:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.501 16:52:08 -- accel/accel.sh@40 -- # local IFS=, 00:04:52.501 16:52:08 -- accel/accel.sh@41 -- # jq -r . 00:04:52.501 Unsupported workload type: foobar 00:04:52.501 [2024-04-18 16:52:08.129306] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:52.501 accel_perf options: 00:04:52.501 [-h help message] 00:04:52.501 [-q queue depth per core] 00:04:52.501 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:52.501 [-T number of threads per core 00:04:52.501 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:52.501 [-t time in seconds] 00:04:52.501 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:52.501 [ dif_verify, , dif_generate, dif_generate_copy 00:04:52.501 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:52.501 [-l for compress/decompress workloads, name of uncompressed input file 00:04:52.501 [-S for crc32c workload, use this seed value (default 0) 00:04:52.501 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:52.501 [-f for fill workload, use this BYTE value (default 255) 00:04:52.501 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:52.501 [-y verify result if this switch is on] 00:04:52.501 [-a tasks to allocate per core (default: same value as -q)] 00:04:52.501 Can be used to spread operations across a wider range of memory. 00:04:52.501 16:52:08 -- common/autotest_common.sh@641 -- # es=1 00:04:52.501 16:52:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:52.501 16:52:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:52.501 16:52:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:52.501 00:04:52.501 real 0m0.023s 00:04:52.501 user 0m0.015s 00:04:52.501 sys 0m0.008s 00:04:52.501 16:52:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.501 16:52:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.501 ************************************ 00:04:52.501 END TEST accel_wrong_workload 00:04:52.501 ************************************ 00:04:52.501 Error: writing output failed: Broken pipe 00:04:52.501 16:52:08 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:52.501 16:52:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:52.501 16:52:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:04:52.501 16:52:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.761 ************************************ 00:04:52.761 START TEST accel_negative_buffers 00:04:52.761 ************************************ 00:04:52.761 16:52:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:52.761 16:52:08 -- common/autotest_common.sh@638 -- # local es=0 00:04:52.761 16:52:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:52.761 16:52:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:52.761 16:52:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:52.761 16:52:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:52.761 16:52:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:52.761 16:52:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:04:52.761 16:52:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:52.761 16:52:08 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.761 16:52:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.761 16:52:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.761 16:52:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.761 16:52:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.761 16:52:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.761 16:52:08 -- accel/accel.sh@40 -- # local IFS=, 00:04:52.761 16:52:08 -- accel/accel.sh@41 -- # jq -r . 00:04:52.761 -x option must be non-negative. 
00:04:52.761 [2024-04-18 16:52:08.270727] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:52.761 accel_perf options: 00:04:52.761 [-h help message] 00:04:52.761 [-q queue depth per core] 00:04:52.761 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:52.761 [-T number of threads per core 00:04:52.761 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:52.761 [-t time in seconds] 00:04:52.761 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:52.761 [ dif_verify, , dif_generate, dif_generate_copy 00:04:52.761 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:52.761 [-l for compress/decompress workloads, name of uncompressed input file 00:04:52.761 [-S for crc32c workload, use this seed value (default 0) 00:04:52.761 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:52.761 [-f for fill workload, use this BYTE value (default 255) 00:04:52.761 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:52.761 [-y verify result if this switch is on] 00:04:52.761 [-a tasks to allocate per core (default: same value as -q)] 00:04:52.761 Can be used to spread operations across a wider range of memory. 
00:04:52.761 16:52:08 -- common/autotest_common.sh@641 -- # es=1 00:04:52.761 16:52:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:52.761 16:52:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:52.761 16:52:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:52.761 00:04:52.761 real 0m0.022s 00:04:52.761 user 0m0.010s 00:04:52.761 sys 0m0.012s 00:04:52.761 16:52:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.761 16:52:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.761 ************************************ 00:04:52.761 END TEST accel_negative_buffers 00:04:52.761 ************************************ 00:04:52.761 Error: writing output failed: Broken pipe 00:04:52.761 16:52:08 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:52.761 16:52:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:52.761 16:52:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.761 16:52:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.761 ************************************ 00:04:52.761 START TEST accel_crc32c 00:04:52.761 ************************************ 00:04:52.761 16:52:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:52.761 16:52:08 -- accel/accel.sh@16 -- # local accel_opc 00:04:52.761 16:52:08 -- accel/accel.sh@17 -- # local accel_module 00:04:52.761 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:52.761 16:52:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:52.761 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:52.761 16:52:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:52.761 16:52:08 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.761 16:52:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.761 16:52:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.761 16:52:08 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.761 16:52:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.761 16:52:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.762 16:52:08 -- accel/accel.sh@40 -- # local IFS=, 00:04:52.762 16:52:08 -- accel/accel.sh@41 -- # jq -r . 00:04:52.762 [2024-04-18 16:52:08.410053] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:52.762 [2024-04-18 16:52:08.410117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579108 ] 00:04:52.762 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.021 [2024-04-18 16:52:08.472082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.021 [2024-04-18 16:52:08.588747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val= 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val= 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val=0x1 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val= 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val= 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 
00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val=crc32c 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val=32 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val= 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val=software 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@22 -- # accel_module=software 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val=32 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val=32 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val=1 00:04:53.021 16:52:08 
-- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val=Yes 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val= 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:53.021 16:52:08 -- accel/accel.sh@20 -- # val= 00:04:53.021 16:52:08 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # IFS=: 00:04:53.021 16:52:08 -- accel/accel.sh@19 -- # read -r var val 00:04:54.399 16:52:09 -- accel/accel.sh@20 -- # val= 00:04:54.399 16:52:09 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # IFS=: 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # read -r var val 00:04:54.399 16:52:09 -- accel/accel.sh@20 -- # val= 00:04:54.399 16:52:09 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # IFS=: 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # read -r var val 00:04:54.399 16:52:09 -- accel/accel.sh@20 -- # val= 00:04:54.399 16:52:09 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # IFS=: 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # read -r var val 00:04:54.399 16:52:09 -- accel/accel.sh@20 -- # val= 00:04:54.399 16:52:09 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # IFS=: 00:04:54.399 
16:52:09 -- accel/accel.sh@19 -- # read -r var val 00:04:54.399 16:52:09 -- accel/accel.sh@20 -- # val= 00:04:54.399 16:52:09 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # IFS=: 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # read -r var val 00:04:54.399 16:52:09 -- accel/accel.sh@20 -- # val= 00:04:54.399 16:52:09 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # IFS=: 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # read -r var val 00:04:54.399 16:52:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.399 16:52:09 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:54.399 16:52:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.399 00:04:54.399 real 0m1.475s 00:04:54.399 user 0m1.335s 00:04:54.399 sys 0m0.141s 00:04:54.399 16:52:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.399 16:52:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.399 ************************************ 00:04:54.399 END TEST accel_crc32c 00:04:54.399 ************************************ 00:04:54.399 16:52:09 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:54.399 16:52:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:54.399 16:52:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.399 16:52:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.399 ************************************ 00:04:54.399 START TEST accel_crc32c_C2 00:04:54.399 ************************************ 00:04:54.399 16:52:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:54.399 16:52:09 -- accel/accel.sh@16 -- # local accel_opc 00:04:54.399 16:52:09 -- accel/accel.sh@17 -- # local accel_module 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # IFS=: 00:04:54.399 16:52:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:54.399 16:52:09 -- accel/accel.sh@19 -- # read -r var val 
00:04:54.399 16:52:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:54.399 16:52:09 -- accel/accel.sh@12 -- # build_accel_config 00:04:54.399 16:52:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.399 16:52:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.399 16:52:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.399 16:52:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.399 16:52:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.399 16:52:09 -- accel/accel.sh@40 -- # local IFS=, 00:04:54.399 16:52:09 -- accel/accel.sh@41 -- # jq -r . 00:04:54.399 [2024-04-18 16:52:10.002817] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:54.399 [2024-04-18 16:52:10.002876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579395 ] 00:04:54.399 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.399 [2024-04-18 16:52:10.066492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.659 [2024-04-18 16:52:10.183674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val= 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val= 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val=0x1 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- 
accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val= 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val= 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val=crc32c 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val=0 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val= 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val=software 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@22 -- # accel_module=software 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val=32 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 
-- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val=32 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val=1 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val=Yes 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.659 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.659 16:52:10 -- accel/accel.sh@20 -- # val= 00:04:54.659 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.660 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.660 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:54.660 16:52:10 -- accel/accel.sh@20 -- # val= 00:04:54.660 16:52:10 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.660 16:52:10 -- accel/accel.sh@19 -- # IFS=: 00:04:54.660 16:52:10 -- accel/accel.sh@19 -- # read -r var val 00:04:56.041 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.041 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.041 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.041 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.041 16:52:11 -- 
accel/accel.sh@20 -- # val= 00:04:56.041 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.041 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.041 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.041 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.041 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.041 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.041 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.041 16:52:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:56.041 16:52:11 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:56.041 16:52:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:56.041 00:04:56.041 real 0m1.467s 00:04:56.041 user 0m1.333s 00:04:56.041 sys 0m0.135s 00:04:56.041 16:52:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.041 16:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:56.041 ************************************ 00:04:56.041 END TEST accel_crc32c_C2 00:04:56.041 ************************************ 00:04:56.041 16:52:11 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:56.041 16:52:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:56.041 16:52:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.041 16:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:56.041 ************************************ 00:04:56.041 START TEST accel_copy 00:04:56.041 ************************************ 00:04:56.041 16:52:11 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:04:56.041 16:52:11 -- accel/accel.sh@16 -- # local accel_opc 00:04:56.041 16:52:11 -- accel/accel.sh@17 -- # local accel_module 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.041 16:52:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:56.041 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.041 16:52:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:56.041 16:52:11 -- accel/accel.sh@12 -- # build_accel_config 00:04:56.041 16:52:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.041 16:52:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.041 16:52:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.041 16:52:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.041 16:52:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.041 16:52:11 -- accel/accel.sh@40 -- # local IFS=, 00:04:56.041 16:52:11 -- accel/accel.sh@41 -- # jq -r . 00:04:56.041 [2024-04-18 16:52:11.587880] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:04:56.041 [2024-04-18 16:52:11.587940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579561 ] 00:04:56.041 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.041 [2024-04-18 16:52:11.648666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.302 [2024-04-18 16:52:11.764869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val=0x1 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val=copy 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@23 -- # accel_opc=copy 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- 
accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val=software 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@22 -- # accel_module=software 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val=32 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val=32 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val=1 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val=Yes 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 
-- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:56.302 16:52:11 -- accel/accel.sh@20 -- # val= 00:04:56.302 16:52:11 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # IFS=: 00:04:56.302 16:52:11 -- accel/accel.sh@19 -- # read -r var val 00:04:57.716 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.716 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.716 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.716 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.716 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.716 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.716 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.716 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.716 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.716 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.716 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.716 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.716 16:52:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:57.716 16:52:13 -- 
accel/accel.sh@27 -- # [[ -n copy ]] 00:04:57.716 16:52:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.716 00:04:57.716 real 0m1.474s 00:04:57.716 user 0m1.332s 00:04:57.716 sys 0m0.141s 00:04:57.716 16:52:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.716 16:52:13 -- common/autotest_common.sh@10 -- # set +x 00:04:57.716 ************************************ 00:04:57.716 END TEST accel_copy 00:04:57.716 ************************************ 00:04:57.716 16:52:13 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:57.716 16:52:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:04:57.716 16:52:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.716 16:52:13 -- common/autotest_common.sh@10 -- # set +x 00:04:57.716 ************************************ 00:04:57.716 START TEST accel_fill 00:04:57.716 ************************************ 00:04:57.716 16:52:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:57.716 16:52:13 -- accel/accel.sh@16 -- # local accel_opc 00:04:57.716 16:52:13 -- accel/accel.sh@17 -- # local accel_module 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.716 16:52:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:57.716 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.716 16:52:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:57.716 16:52:13 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.716 16:52:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.716 16:52:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.716 16:52:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.716 16:52:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.716 16:52:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.716 16:52:13 -- 
accel/accel.sh@40 -- # local IFS=, 00:04:57.716 16:52:13 -- accel/accel.sh@41 -- # jq -r . 00:04:57.716 [2024-04-18 16:52:13.193069] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:57.716 [2024-04-18 16:52:13.193139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579726 ] 00:04:57.716 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.716 [2024-04-18 16:52:13.259285] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.716 [2024-04-18 16:52:13.374553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val=0x1 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val=fill 
00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@23 -- # accel_opc=fill 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val=0x80 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 16:52:13 -- accel/accel.sh@20 -- # val=software 00:04:57.975 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 16:52:13 -- accel/accel.sh@22 -- # accel_module=software 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.976 16:52:13 -- accel/accel.sh@20 -- # val=64 00:04:57.976 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.976 16:52:13 -- accel/accel.sh@20 -- # val=64 00:04:57.976 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.976 16:52:13 -- accel/accel.sh@20 -- # val=1 00:04:57.976 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.976 
16:52:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:57.976 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.976 16:52:13 -- accel/accel.sh@20 -- # val=Yes 00:04:57.976 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.976 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.976 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:57.976 16:52:13 -- accel/accel.sh@20 -- # val= 00:04:57.976 16:52:13 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # IFS=: 00:04:57.976 16:52:13 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:14 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:14 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:14 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:14 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:14 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:14 -- accel/accel.sh@21 -- # case "$var" in 
00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:14 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:14 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:59.356 16:52:14 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:59.356 16:52:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:59.356 00:04:59.356 real 0m1.474s 00:04:59.356 user 0m1.327s 00:04:59.356 sys 0m0.147s 00:04:59.356 16:52:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.356 16:52:14 -- common/autotest_common.sh@10 -- # set +x 00:04:59.356 ************************************ 00:04:59.356 END TEST accel_fill 00:04:59.356 ************************************ 00:04:59.356 16:52:14 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:59.356 16:52:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:59.356 16:52:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.356 16:52:14 -- common/autotest_common.sh@10 -- # set +x 00:04:59.356 ************************************ 00:04:59.356 START TEST accel_copy_crc32c 00:04:59.356 ************************************ 00:04:59.356 16:52:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:04:59.356 16:52:14 -- accel/accel.sh@16 -- # local accel_opc 00:04:59.356 16:52:14 -- accel/accel.sh@17 -- # local accel_module 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:59.356 16:52:14 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 
00:04:59.356 16:52:14 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.356 16:52:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.356 16:52:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.356 16:52:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.356 16:52:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.356 16:52:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.356 16:52:14 -- accel/accel.sh@40 -- # local IFS=, 00:04:59.356 16:52:14 -- accel/accel.sh@41 -- # jq -r . 00:04:59.356 [2024-04-18 16:52:14.768027] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:04:59.356 [2024-04-18 16:52:14.768096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580009 ] 00:04:59.356 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.356 [2024-04-18 16:52:14.824313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.356 [2024-04-18 16:52:14.940657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val=0x1 00:04:59.356 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 
16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val= 00:04:59.356 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:59.356 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:15 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val=0 00:04:59.356 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.356 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.356 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.356 16:52:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val= 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val=software 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@22 -- # accel_module=software 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val=32 00:04:59.357 
16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val=32 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val=1 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val=Yes 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val= 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:04:59.357 16:52:15 -- accel/accel.sh@20 -- # val= 00:04:59.357 16:52:15 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # IFS=: 00:04:59.357 16:52:15 -- accel/accel.sh@19 -- # read -r var val 00:05:00.735 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.735 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.735 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.735 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # IFS=: 
00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.735 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.735 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.735 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.735 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.735 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.735 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.735 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.735 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.735 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.735 16:52:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:00.735 16:52:16 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:00.735 16:52:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.735 00:05:00.735 real 0m1.474s 00:05:00.735 user 0m1.334s 00:05:00.735 sys 0m0.141s 00:05:00.735 16:52:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.735 16:52:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.735 ************************************ 00:05:00.735 END TEST accel_copy_crc32c 00:05:00.736 ************************************ 00:05:00.736 16:52:16 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:00.736 16:52:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:00.736 16:52:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.736 16:52:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 ************************************ 00:05:00.736 START 
TEST accel_copy_crc32c_C2 00:05:00.736 ************************************ 00:05:00.736 16:52:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:00.736 16:52:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:00.736 16:52:16 -- accel/accel.sh@17 -- # local accel_module 00:05:00.736 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.736 16:52:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:00.736 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.736 16:52:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:00.736 16:52:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.736 16:52:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.736 16:52:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.736 16:52:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.736 16:52:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.736 16:52:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.736 16:52:16 -- accel/accel.sh@40 -- # local IFS=, 00:05:00.736 16:52:16 -- accel/accel.sh@41 -- # jq -r . 00:05:00.736 [2024-04-18 16:52:16.377921] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:05:00.736 [2024-04-18 16:52:16.377984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580173 ] 00:05:00.736 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.994 [2024-04-18 16:52:16.441916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.994 [2024-04-18 16:52:16.561293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val=0x1 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- 
accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val=0 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val=software 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@22 -- # accel_module=software 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val=32 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val=32 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val=1 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 
-- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val=Yes 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:00.994 16:52:16 -- accel/accel.sh@20 -- # val= 00:05:00.994 16:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # IFS=: 00:05:00.994 16:52:16 -- accel/accel.sh@19 -- # read -r var val 00:05:02.375 16:52:17 -- accel/accel.sh@20 -- # val= 00:05:02.375 16:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # IFS=: 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # read -r var val 00:05:02.375 16:52:17 -- accel/accel.sh@20 -- # val= 00:05:02.375 16:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # IFS=: 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # read -r var val 00:05:02.375 16:52:17 -- accel/accel.sh@20 -- # val= 00:05:02.375 16:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # IFS=: 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # read -r var val 00:05:02.375 16:52:17 -- accel/accel.sh@20 -- # val= 00:05:02.375 16:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # IFS=: 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # read -r var val 00:05:02.375 16:52:17 -- accel/accel.sh@20 -- # val= 00:05:02.375 16:52:17 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # IFS=: 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # read -r var val 00:05:02.375 16:52:17 -- accel/accel.sh@20 -- # val= 00:05:02.375 16:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # IFS=: 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # read -r var val 00:05:02.375 16:52:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.375 16:52:17 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:02.375 16:52:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.375 00:05:02.375 real 0m1.485s 00:05:02.375 user 0m1.343s 00:05:02.375 sys 0m0.143s 00:05:02.375 16:52:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.375 16:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:02.375 ************************************ 00:05:02.375 END TEST accel_copy_crc32c_C2 00:05:02.375 ************************************ 00:05:02.375 16:52:17 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:02.375 16:52:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:02.375 16:52:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.375 16:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:02.375 ************************************ 00:05:02.375 START TEST accel_dualcast 00:05:02.375 ************************************ 00:05:02.375 16:52:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:02.375 16:52:17 -- accel/accel.sh@16 -- # local accel_opc 00:05:02.375 16:52:17 -- accel/accel.sh@17 -- # local accel_module 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # IFS=: 00:05:02.375 16:52:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:02.375 16:52:17 -- accel/accel.sh@19 -- # read -r var val 00:05:02.375 16:52:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w dualcast -y 00:05:02.375 16:52:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:02.375 16:52:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.375 16:52:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.375 16:52:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.375 16:52:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.375 16:52:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.375 16:52:17 -- accel/accel.sh@40 -- # local IFS=, 00:05:02.375 16:52:17 -- accel/accel.sh@41 -- # jq -r . 00:05:02.375 [2024-04-18 16:52:17.974210] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:02.375 [2024-04-18 16:52:17.974275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580451 ] 00:05:02.375 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.375 [2024-04-18 16:52:18.035745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.636 [2024-04-18 16:52:18.156282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val= 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val= 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val=0x1 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val= 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- 
# case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val= 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val=dualcast 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val= 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val=software 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@22 -- # accel_module=software 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val=32 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val=32 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val=1 
00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val=Yes 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val= 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:02.636 16:52:18 -- accel/accel.sh@20 -- # val= 00:05:02.636 16:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # IFS=: 00:05:02.636 16:52:18 -- accel/accel.sh@19 -- # read -r var val 00:05:04.015 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.015 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.015 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.015 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.015 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.015 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.015 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.015 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # 
IFS=: 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.015 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.015 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.015 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.015 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.015 16:52:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.015 16:52:19 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:04.015 16:52:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.015 00:05:04.015 real 0m1.471s 00:05:04.015 user 0m1.331s 00:05:04.015 sys 0m0.140s 00:05:04.015 16:52:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.015 16:52:19 -- common/autotest_common.sh@10 -- # set +x 00:05:04.015 ************************************ 00:05:04.015 END TEST accel_dualcast 00:05:04.015 ************************************ 00:05:04.015 16:52:19 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:04.015 16:52:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:04.015 16:52:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.015 16:52:19 -- common/autotest_common.sh@10 -- # set +x 00:05:04.015 ************************************ 00:05:04.015 START TEST accel_compare 00:05:04.015 ************************************ 00:05:04.015 16:52:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:04.015 16:52:19 -- accel/accel.sh@16 -- # local accel_opc 00:05:04.015 16:52:19 -- accel/accel.sh@17 -- # local accel_module 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.015 16:52:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:04.015 16:52:19 -- accel/accel.sh@19 -- # read -r var 
val 00:05:04.015 16:52:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:04.015 16:52:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.015 16:52:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.015 16:52:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.015 16:52:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.015 16:52:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.015 16:52:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.015 16:52:19 -- accel/accel.sh@40 -- # local IFS=, 00:05:04.015 16:52:19 -- accel/accel.sh@41 -- # jq -r . 00:05:04.015 [2024-04-18 16:52:19.559217] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:04.015 [2024-04-18 16:52:19.559278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580623 ] 00:05:04.015 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.015 [2024-04-18 16:52:19.620457] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.275 [2024-04-18 16:52:19.741020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val=0x1 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- 
accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val=compare 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val=software 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@22 -- # accel_module=software 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val=32 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val=32 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- 
accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val=1 00:05:04.275 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.275 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.275 16:52:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.276 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.276 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.276 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.276 16:52:19 -- accel/accel.sh@20 -- # val=Yes 00:05:04.276 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.276 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.276 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.276 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.276 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.276 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.276 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:04.276 16:52:19 -- accel/accel.sh@20 -- # val= 00:05:04.276 16:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.276 16:52:19 -- accel/accel.sh@19 -- # IFS=: 00:05:04.276 16:52:19 -- accel/accel.sh@19 -- # read -r var val 00:05:05.656 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.656 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.656 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.656 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.656 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.656 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.656 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.656 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.657 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.657 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.657 
16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.657 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.657 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.657 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.657 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.657 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.657 16:52:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.657 16:52:21 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:05.657 16:52:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.657 00:05:05.657 real 0m1.484s 00:05:05.657 user 0m1.354s 00:05:05.657 sys 0m0.130s 00:05:05.657 16:52:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.657 16:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:05.657 ************************************ 00:05:05.657 END TEST accel_compare 00:05:05.657 ************************************ 00:05:05.657 16:52:21 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:05.657 16:52:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:05.657 16:52:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.657 16:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:05.657 ************************************ 00:05:05.657 START TEST accel_xor 00:05:05.657 ************************************ 00:05:05.657 16:52:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:05.657 16:52:21 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.657 16:52:21 -- accel/accel.sh@17 -- # local accel_module 00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # IFS=: 
00:05:05.657 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.657 16:52:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:05.657 16:52:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:05.657 16:52:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.657 16:52:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.657 16:52:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.657 16:52:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.657 16:52:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.657 16:52:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.657 16:52:21 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.657 16:52:21 -- accel/accel.sh@41 -- # jq -r . 00:05:05.657 [2024-04-18 16:52:21.166862] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:05.657 [2024-04-18 16:52:21.166923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580824 ] 00:05:05.657 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.657 [2024-04-18 16:52:21.227212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.657 [2024-04-18 16:52:21.346341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val=0x1 00:05:05.917 
16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val=xor 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val=2 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val=software 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@22 -- # accel_module=software 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- 
accel/accel.sh@20 -- # val=32 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val=32 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val=1 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val=Yes 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:05.917 16:52:21 -- accel/accel.sh@20 -- # val= 00:05:05.917 16:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # IFS=: 00:05:05.917 16:52:21 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 
16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.339 16:52:22 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:07.339 16:52:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.339 00:05:07.339 real 0m1.471s 00:05:07.339 user 0m1.333s 00:05:07.339 sys 0m0.139s 00:05:07.339 16:52:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.339 16:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.339 ************************************ 00:05:07.339 END TEST accel_xor 00:05:07.339 ************************************ 00:05:07.339 16:52:22 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:07.339 16:52:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:07.339 16:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.339 16:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.339 ************************************ 00:05:07.339 
START TEST accel_xor 00:05:07.339 ************************************ 00:05:07.339 16:52:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:07.339 16:52:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:07.339 16:52:22 -- accel/accel.sh@17 -- # local accel_module 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:07.339 16:52:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.339 16:52:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.339 16:52:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.339 16:52:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.339 16:52:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.339 16:52:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.339 16:52:22 -- accel/accel.sh@40 -- # local IFS=, 00:05:07.339 16:52:22 -- accel/accel.sh@41 -- # jq -r . 00:05:07.339 [2024-04-18 16:52:22.754464] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:05:07.339 [2024-04-18 16:52:22.754530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581065 ] 00:05:07.339 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.339 [2024-04-18 16:52:22.814955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.339 [2024-04-18 16:52:22.932458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val=0x1 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val=xor 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- 
accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val=3 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val= 00:05:07.339 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.339 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.339 16:52:22 -- accel/accel.sh@20 -- # val=software 00:05:07.340 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.340 16:52:22 -- accel/accel.sh@22 -- # accel_module=software 00:05:07.340 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.340 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.340 16:52:22 -- accel/accel.sh@20 -- # val=32 00:05:07.340 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.340 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.340 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.340 16:52:22 -- accel/accel.sh@20 -- # val=32 00:05:07.340 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.340 16:52:22 -- accel/accel.sh@19 -- # IFS=: 00:05:07.340 16:52:22 -- accel/accel.sh@19 -- # read -r var val 00:05:07.340 16:52:22 -- accel/accel.sh@20 -- # val=1 00:05:07.340 16:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # IFS=: 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # read -r var val 00:05:07.340 16:52:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.340 16:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # IFS=: 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- 
# read -r var val 00:05:07.340 16:52:23 -- accel/accel.sh@20 -- # val=Yes 00:05:07.340 16:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # IFS=: 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # read -r var val 00:05:07.340 16:52:23 -- accel/accel.sh@20 -- # val= 00:05:07.340 16:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # IFS=: 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # read -r var val 00:05:07.340 16:52:23 -- accel/accel.sh@20 -- # val= 00:05:07.340 16:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # IFS=: 00:05:07.340 16:52:23 -- accel/accel.sh@19 -- # read -r var val 00:05:08.720 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.720 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.720 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.720 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.720 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.720 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.720 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.720 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.720 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.720 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.720 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.720 16:52:24 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.720 16:52:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.720 16:52:24 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:08.720 16:52:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.720 00:05:08.720 real 0m1.481s 00:05:08.720 user 0m1.342s 00:05:08.720 sys 0m0.140s 00:05:08.720 16:52:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.720 16:52:24 -- common/autotest_common.sh@10 -- # set +x 00:05:08.720 ************************************ 00:05:08.720 END TEST accel_xor 00:05:08.720 ************************************ 00:05:08.720 16:52:24 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:08.720 16:52:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:08.720 16:52:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.720 16:52:24 -- common/autotest_common.sh@10 -- # set +x 00:05:08.720 ************************************ 00:05:08.720 START TEST accel_dif_verify 00:05:08.720 ************************************ 00:05:08.720 16:52:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:08.720 16:52:24 -- accel/accel.sh@16 -- # local accel_opc 00:05:08.720 16:52:24 -- accel/accel.sh@17 -- # local accel_module 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.720 16:52:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:08.720 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.720 16:52:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:08.720 16:52:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.720 16:52:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.720 16:52:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.720 16:52:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 
]] 00:05:08.720 16:52:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.720 16:52:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.720 16:52:24 -- accel/accel.sh@40 -- # local IFS=, 00:05:08.720 16:52:24 -- accel/accel.sh@41 -- # jq -r . 00:05:08.720 [2024-04-18 16:52:24.356361] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:08.720 [2024-04-18 16:52:24.356436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581235 ] 00:05:08.720 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.720 [2024-04-18 16:52:24.417748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.980 [2024-04-18 16:52:24.538582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val=0x1 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- 
accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val=dif_verify 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val=software 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@22 -- # accel_module=software 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val=32 00:05:08.980 
16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val=32 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val=1 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val=No 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:08.980 16:52:24 -- accel/accel.sh@20 -- # val= 00:05:08.980 16:52:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # IFS=: 00:05:08.980 16:52:24 -- accel/accel.sh@19 -- # read -r var val 00:05:10.360 16:52:25 -- accel/accel.sh@20 -- # val= 00:05:10.360 16:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # IFS=: 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # read -r var val 00:05:10.360 16:52:25 -- accel/accel.sh@20 -- # val= 00:05:10.360 16:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # IFS=: 
00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # read -r var val 00:05:10.360 16:52:25 -- accel/accel.sh@20 -- # val= 00:05:10.360 16:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # IFS=: 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # read -r var val 00:05:10.360 16:52:25 -- accel/accel.sh@20 -- # val= 00:05:10.360 16:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # IFS=: 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # read -r var val 00:05:10.360 16:52:25 -- accel/accel.sh@20 -- # val= 00:05:10.360 16:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # IFS=: 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # read -r var val 00:05:10.360 16:52:25 -- accel/accel.sh@20 -- # val= 00:05:10.360 16:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # IFS=: 00:05:10.360 16:52:25 -- accel/accel.sh@19 -- # read -r var val 00:05:10.360 16:52:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.360 16:52:25 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:10.360 16:52:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.360 00:05:10.360 real 0m1.477s 00:05:10.360 user 0m1.341s 00:05:10.360 sys 0m0.139s 00:05:10.360 16:52:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.360 16:52:25 -- common/autotest_common.sh@10 -- # set +x 00:05:10.360 ************************************ 00:05:10.360 END TEST accel_dif_verify 00:05:10.360 ************************************ 00:05:10.360 16:52:25 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:10.360 16:52:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:10.360 16:52:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.360 16:52:25 -- common/autotest_common.sh@10 -- # set +x 00:05:10.360 ************************************ 00:05:10.360 START TEST 
accel_dif_generate 00:05:10.360 ************************************ 00:05:10.360 16:52:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:10.361 16:52:25 -- accel/accel.sh@16 -- # local accel_opc 00:05:10.361 16:52:25 -- accel/accel.sh@17 -- # local accel_module 00:05:10.361 16:52:25 -- accel/accel.sh@19 -- # IFS=: 00:05:10.361 16:52:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:10.361 16:52:25 -- accel/accel.sh@19 -- # read -r var val 00:05:10.361 16:52:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:10.361 16:52:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.361 16:52:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.361 16:52:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.361 16:52:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.361 16:52:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.361 16:52:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.361 16:52:25 -- accel/accel.sh@40 -- # local IFS=, 00:05:10.361 16:52:25 -- accel/accel.sh@41 -- # jq -r . 00:05:10.361 [2024-04-18 16:52:25.954183] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:05:10.361 [2024-04-18 16:52:25.954247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581509 ] 00:05:10.361 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.361 [2024-04-18 16:52:26.015080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.619 [2024-04-18 16:52:26.135453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val= 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val= 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val=0x1 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val= 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val= 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val=dif_generate 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 
-- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val= 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val=software 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@22 -- # accel_module=software 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val=32 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val=32 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 
-- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val=1 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val=No 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val= 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:10.619 16:52:26 -- accel/accel.sh@20 -- # val= 00:05:10.619 16:52:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # IFS=: 00:05:10.619 16:52:26 -- accel/accel.sh@19 -- # read -r var val 00:05:12.000 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.000 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.000 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.000 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.000 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.000 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.000 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.000 
16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.000 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.000 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.000 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.000 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.000 16:52:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.000 16:52:27 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:12.000 16:52:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.000 00:05:12.000 real 0m1.482s 00:05:12.000 user 0m1.336s 00:05:12.000 sys 0m0.149s 00:05:12.000 16:52:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.000 16:52:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.000 ************************************ 00:05:12.000 END TEST accel_dif_generate 00:05:12.000 ************************************ 00:05:12.000 16:52:27 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:12.000 16:52:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:12.000 16:52:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.000 16:52:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.000 ************************************ 00:05:12.000 START TEST accel_dif_generate_copy 00:05:12.000 ************************************ 00:05:12.000 16:52:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:12.000 16:52:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:12.000 16:52:27 -- accel/accel.sh@17 -- # local accel_module 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # IFS=: 
00:05:12.000 16:52:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:12.000 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.000 16:52:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:12.000 16:52:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.000 16:52:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.000 16:52:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.000 16:52:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.000 16:52:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.000 16:52:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.000 16:52:27 -- accel/accel.sh@40 -- # local IFS=, 00:05:12.000 16:52:27 -- accel/accel.sh@41 -- # jq -r . 00:05:12.000 [2024-04-18 16:52:27.555000] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:12.000 [2024-04-18 16:52:27.555065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581685 ] 00:05:12.000 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.000 [2024-04-18 16:52:27.616707] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.258 [2024-04-18 16:52:27.737728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.258 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.258 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.258 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.258 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.258 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.258 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.258 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.258 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.258 16:52:27 -- accel/accel.sh@20 -- # 
val=0x1 00:05:12.258 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.258 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.258 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.258 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.258 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.258 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.258 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.258 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val=software 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@22 -- # accel_module=software 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- 
accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val=32 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val=32 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val=1 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val=No 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:12.259 16:52:27 -- accel/accel.sh@20 -- # val= 00:05:12.259 16:52:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # IFS=: 00:05:12.259 16:52:27 -- accel/accel.sh@19 -- # read -r var val 00:05:13.637 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.637 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.637 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.637 
16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.637 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.637 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.637 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.637 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.637 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.637 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.637 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.637 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.637 16:52:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.637 16:52:29 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:13.637 16:52:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.637 00:05:13.637 real 0m1.486s 00:05:13.637 user 0m1.345s 00:05:13.637 sys 0m0.142s 00:05:13.637 16:52:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.637 16:52:29 -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 ************************************ 00:05:13.637 END TEST accel_dif_generate_copy 00:05:13.637 ************************************ 00:05:13.637 16:52:29 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:13.637 16:52:29 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:13.637 16:52:29 -- common/autotest_common.sh@1087 -- # 
'[' 8 -le 1 ']' 00:05:13.637 16:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.637 16:52:29 -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 ************************************ 00:05:13.637 START TEST accel_comp 00:05:13.637 ************************************ 00:05:13.637 16:52:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:13.637 16:52:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.637 16:52:29 -- accel/accel.sh@17 -- # local accel_module 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.637 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.637 16:52:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:13.637 16:52:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:13.637 16:52:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.637 16:52:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.637 16:52:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.637 16:52:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.637 16:52:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.637 16:52:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.637 16:52:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:13.637 16:52:29 -- accel/accel.sh@41 -- # jq -r . 00:05:13.637 [2024-04-18 16:52:29.162525] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:05:13.637 [2024-04-18 16:52:29.162589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581901 ] 00:05:13.637 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.637 [2024-04-18 16:52:29.224661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.897 [2024-04-18 16:52:29.345344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val=0x1 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 
-- # val=compress 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val=software 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@22 -- # accel_module=software 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val=32 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val=32 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val=1 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 
00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val=No 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:13.897 16:52:29 -- accel/accel.sh@20 -- # val= 00:05:13.897 16:52:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # IFS=: 00:05:13.897 16:52:29 -- accel/accel.sh@19 -- # read -r var val 00:05:15.275 16:52:30 -- accel/accel.sh@20 -- # val= 00:05:15.275 16:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # IFS=: 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # read -r var val 00:05:15.275 16:52:30 -- accel/accel.sh@20 -- # val= 00:05:15.275 16:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # IFS=: 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # read -r var val 00:05:15.275 16:52:30 -- accel/accel.sh@20 -- # val= 00:05:15.275 16:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # IFS=: 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # read -r var val 00:05:15.275 16:52:30 -- accel/accel.sh@20 -- # val= 00:05:15.275 16:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # IFS=: 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # read -r var val 00:05:15.275 16:52:30 -- accel/accel.sh@20 -- # 
val= 00:05:15.275 16:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # IFS=: 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # read -r var val 00:05:15.275 16:52:30 -- accel/accel.sh@20 -- # val= 00:05:15.275 16:52:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # IFS=: 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # read -r var val 00:05:15.275 16:52:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.275 16:52:30 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:15.275 16:52:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.275 00:05:15.275 real 0m1.486s 00:05:15.275 user 0m1.340s 00:05:15.275 sys 0m0.147s 00:05:15.275 16:52:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.275 16:52:30 -- common/autotest_common.sh@10 -- # set +x 00:05:15.275 ************************************ 00:05:15.275 END TEST accel_comp 00:05:15.275 ************************************ 00:05:15.275 16:52:30 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.275 16:52:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:15.275 16:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.275 16:52:30 -- common/autotest_common.sh@10 -- # set +x 00:05:15.275 ************************************ 00:05:15.275 START TEST accel_decomp 00:05:15.275 ************************************ 00:05:15.275 16:52:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.275 16:52:30 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.275 16:52:30 -- accel/accel.sh@17 -- # local accel_module 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # IFS=: 00:05:15.275 16:52:30 -- accel/accel.sh@19 -- # read -r var val 00:05:15.275 16:52:30 -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.275 16:52:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.275 16:52:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.275 16:52:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.275 16:52:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.275 16:52:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.275 16:52:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.275 16:52:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.275 16:52:30 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.275 16:52:30 -- accel/accel.sh@41 -- # jq -r . 00:05:15.275 [2024-04-18 16:52:30.767714] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:15.275 [2024-04-18 16:52:30.767779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582127 ] 00:05:15.275 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.275 [2024-04-18 16:52:30.829164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.275 [2024-04-18 16:52:30.949447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.534 16:52:31 -- accel/accel.sh@20 -- # val= 00:05:15.534 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.534 16:52:31 -- accel/accel.sh@20 -- # val= 00:05:15.534 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.534 16:52:31 -- 
accel/accel.sh@20 -- # val= 00:05:15.534 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.534 16:52:31 -- accel/accel.sh@20 -- # val=0x1 00:05:15.534 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.534 16:52:31 -- accel/accel.sh@20 -- # val= 00:05:15.534 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.534 16:52:31 -- accel/accel.sh@20 -- # val= 00:05:15.534 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.534 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.534 16:52:31 -- accel/accel.sh@20 -- # val=decompress 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val= 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val=software 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 
-- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val=32 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val=32 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val=1 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val=Yes 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val= 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:15.535 16:52:31 -- accel/accel.sh@20 -- # val= 00:05:15.535 16:52:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # IFS=: 00:05:15.535 16:52:31 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 
-- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.913 16:52:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:16.913 16:52:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.913 00:05:16.913 real 0m1.471s 00:05:16.913 user 0m1.330s 00:05:16.913 sys 0m0.143s 00:05:16.913 16:52:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.913 16:52:32 -- common/autotest_common.sh@10 -- # set +x 00:05:16.913 ************************************ 00:05:16.913 END TEST accel_decomp 00:05:16.913 ************************************ 00:05:16.913 16:52:32 -- 
accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:16.913 16:52:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:16.913 16:52:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.913 16:52:32 -- common/autotest_common.sh@10 -- # set +x 00:05:16.913 ************************************ 00:05:16.913 START TEST accel_decmop_full 00:05:16.913 ************************************ 00:05:16.913 16:52:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:16.913 16:52:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:16.913 16:52:32 -- accel/accel.sh@17 -- # local accel_module 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:16.913 16:52:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:16.913 16:52:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.913 16:52:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.913 16:52:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.913 16:52:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.913 16:52:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.913 16:52:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.913 16:52:32 -- accel/accel.sh@40 -- # local IFS=, 00:05:16.913 16:52:32 -- accel/accel.sh@41 -- # jq -r . 00:05:16.913 [2024-04-18 16:52:32.368867] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:05:16.913 [2024-04-18 16:52:32.368931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582295 ] 00:05:16.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.913 [2024-04-18 16:52:32.432253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.913 [2024-04-18 16:52:32.552276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val=0x1 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 
-- # val=decompress 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:16.913 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.913 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.913 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:16.914 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.914 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.914 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:16.914 16:52:32 -- accel/accel.sh@20 -- # val=software 00:05:16.914 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.914 16:52:32 -- accel/accel.sh@22 -- # accel_module=software 00:05:16.914 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:16.914 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:17.170 16:52:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.170 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:17.170 16:52:32 -- accel/accel.sh@20 -- # val=32 00:05:17.170 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:17.170 16:52:32 -- accel/accel.sh@20 -- # val=32 00:05:17.170 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:17.170 16:52:32 -- accel/accel.sh@20 -- # val=1 00:05:17.170 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # 
IFS=: 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:17.170 16:52:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.170 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:17.170 16:52:32 -- accel/accel.sh@20 -- # val=Yes 00:05:17.170 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:17.170 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:17.170 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:17.170 16:52:32 -- accel/accel.sh@20 -- # val= 00:05:17.170 16:52:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # IFS=: 00:05:17.170 16:52:32 -- accel/accel.sh@19 -- # read -r var val 00:05:18.167 16:52:33 -- accel/accel.sh@20 -- # val= 00:05:18.167 16:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # IFS=: 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # read -r var val 00:05:18.167 16:52:33 -- accel/accel.sh@20 -- # val= 00:05:18.167 16:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # IFS=: 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # read -r var val 00:05:18.167 16:52:33 -- accel/accel.sh@20 -- # val= 00:05:18.167 16:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # IFS=: 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # read -r var val 00:05:18.167 16:52:33 -- accel/accel.sh@20 -- # val= 00:05:18.167 16:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # IFS=: 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # read -r var val 00:05:18.167 16:52:33 -- accel/accel.sh@20 
-- # val= 00:05:18.167 16:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # IFS=: 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # read -r var val 00:05:18.167 16:52:33 -- accel/accel.sh@20 -- # val= 00:05:18.167 16:52:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # IFS=: 00:05:18.167 16:52:33 -- accel/accel.sh@19 -- # read -r var val 00:05:18.167 16:52:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.167 16:52:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:18.167 16:52:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.167 00:05:18.167 real 0m1.497s 00:05:18.167 user 0m1.354s 00:05:18.167 sys 0m0.144s 00:05:18.167 16:52:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.167 16:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:18.167 ************************************ 00:05:18.167 END TEST accel_decmop_full 00:05:18.167 ************************************ 00:05:18.167 16:52:33 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:18.167 16:52:33 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:18.167 16:52:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.167 16:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:18.426 ************************************ 00:05:18.426 START TEST accel_decomp_mcore 00:05:18.426 ************************************ 00:05:18.426 16:52:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:18.426 16:52:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.426 16:52:33 -- accel/accel.sh@17 -- # local accel_module 00:05:18.426 16:52:33 -- accel/accel.sh@19 -- # IFS=: 00:05:18.426 16:52:33 -- accel/accel.sh@19 -- # read -r var val 00:05:18.426 
16:52:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:18.426 16:52:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:18.426 16:52:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.426 16:52:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.426 16:52:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.426 16:52:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.426 16:52:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.426 16:52:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.426 16:52:33 -- accel/accel.sh@40 -- # local IFS=, 00:05:18.426 16:52:33 -- accel/accel.sh@41 -- # jq -r . 00:05:18.426 [2024-04-18 16:52:33.985144] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:18.426 [2024-04-18 16:52:33.985209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582573 ] 00:05:18.426 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.426 [2024-04-18 16:52:34.046756] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.684 [2024-04-18 16:52:34.170356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.684 [2024-04-18 16:52:34.170409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.684 [2024-04-18 16:52:34.170472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.684 [2024-04-18 16:52:34.170476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val= 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 
-- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val= 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val= 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val=0xf 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val= 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val= 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val=decompress 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val= 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 
00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val=software 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@22 -- # accel_module=software 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val=32 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val=32 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val=1 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val=Yes 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val= 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 
16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:18.684 16:52:34 -- accel/accel.sh@20 -- # val= 00:05:18.684 16:52:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # IFS=: 00:05:18.684 16:52:34 -- accel/accel.sh@19 -- # read -r var val 00:05:20.055 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.055 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.055 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.055 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.055 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.055 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.055 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.055 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.055 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.055 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.055 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.055 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.055 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.055 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.055 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.056 16:52:35 
-- accel/accel.sh@20 -- # val=
00:05:20.056 16:52:35 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.056 16:52:35 -- accel/accel.sh@19 -- # IFS=:
00:05:20.056 16:52:35 -- accel/accel.sh@19 -- # read -r var val
00:05:20.056 16:52:35 -- accel/accel.sh@20 -- # val=
00:05:20.056 16:52:35 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.056 16:52:35 -- accel/accel.sh@19 -- # IFS=:
00:05:20.056 16:52:35 -- accel/accel.sh@19 -- # read -r var val
00:05:20.056 16:52:35 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:20.056 16:52:35 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:20.056 16:52:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:20.056
00:05:20.056 real 0m1.494s
00:05:20.056 user 0m4.815s
00:05:20.056 sys 0m0.143s
00:05:20.056 16:52:35 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:20.056 16:52:35 -- common/autotest_common.sh@10 -- # set +x
00:05:20.056 ************************************
00:05:20.056 END TEST accel_decomp_mcore
00:05:20.056 ************************************
00:05:20.056 16:52:35 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:20.056 16:52:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:05:20.056 16:52:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:20.056 16:52:35 -- common/autotest_common.sh@10 -- # set +x
00:05:20.056 ************************************
00:05:20.056 START TEST accel_decomp_full_mcore
00:05:20.056 ************************************
00:05:20.056 16:52:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:20.056 16:52:35 -- accel/accel.sh@16 -- # local accel_opc
00:05:20.056 16:52:35 -- accel/accel.sh@17 -- # local accel_module
00:05:20.056 16:52:35 -- accel/accel.sh@19 -- # IFS=:
00:05:20.056 16:52:35 -- accel/accel.sh@19 -- # read -r var val
00:05:20.056 16:52:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:20.056 16:52:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:20.056 16:52:35 -- accel/accel.sh@12 -- # build_accel_config
00:05:20.056 16:52:35 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:20.056 16:52:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:20.056 16:52:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:20.056 16:52:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:20.056 16:52:35 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:20.056 16:52:35 -- accel/accel.sh@40 -- # local IFS=,
00:05:20.056 16:52:35 -- accel/accel.sh@41 -- # jq -r .
00:05:20.056 [2024-04-18 16:52:35.588391] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
00:05:20.056 [2024-04-18 16:52:35.588457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582749 ]
00:05:20.056 EAL: No free 2048 kB hugepages reported on node 1
00:05:20.056 [2024-04-18 16:52:35.646397] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:20.314 [2024-04-18 16:52:35.768099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:20.314 [2024-04-18 16:52:35.768150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:20.314 [2024-04-18 16:52:35.768202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:20.314 [2024-04-18 16:52:35.768205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=
00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=:
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val
00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=
00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=:
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val
00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=
00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=:
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val
00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=0xf
00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=:
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val
00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=
00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in
00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=:
00:05:20.314 16:52:35
-- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=decompress 00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=software 00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@22 -- # accel_module=software 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=32 00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=32 00:05:20.314 16:52:35 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.314 16:52:35 -- accel/accel.sh@20 -- # val=1 00:05:20.314 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.314 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.315 16:52:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.315 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.315 16:52:35 -- accel/accel.sh@20 -- # val=Yes 00:05:20.315 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.315 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.315 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:20.315 16:52:35 -- accel/accel.sh@20 -- # val= 00:05:20.315 16:52:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # IFS=: 00:05:20.315 16:52:35 -- accel/accel.sh@19 -- # read -r var val 00:05:21.687 16:52:37 -- accel/accel.sh@20 -- # val= 00:05:21.687 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.687 16:52:37 -- accel/accel.sh@20 -- # val= 00:05:21.687 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.687 16:52:37 -- accel/accel.sh@20 -- # val= 00:05:21.687 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.687 
16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.687 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.687 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.687 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.687 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.687 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.687 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.687 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.687 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.687 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.688 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.688 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.688 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.688 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.688 16:52:37 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:21.688 16:52:37 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:21.688 16:52:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:21.688
00:05:21.688 real 0m1.495s
00:05:21.688 user 0m4.829s
00:05:21.688 sys 0m0.158s
00:05:21.688 16:52:37 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:21.688 16:52:37 -- common/autotest_common.sh@10 -- # set +x
00:05:21.688 ************************************
00:05:21.688 END TEST accel_decomp_full_mcore
00:05:21.688 ************************************
00:05:21.688 16:52:37 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:21.688 16:52:37 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:05:21.688 16:52:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:21.688 16:52:37 -- common/autotest_common.sh@10 -- # set +x
00:05:21.688 ************************************
00:05:21.688 START TEST accel_decomp_mthread
00:05:21.688 ************************************
00:05:21.688 16:52:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:21.688 16:52:37 -- accel/accel.sh@16 -- # local accel_opc
00:05:21.688 16:52:37 -- accel/accel.sh@17 -- # local accel_module
00:05:21.688 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.688 16:52:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:21.688 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.688 16:52:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:21.688 16:52:37 -- accel/accel.sh@12 -- # build_accel_config
00:05:21.688 16:52:37 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:21.688 16:52:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:21.688 16:52:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:21.688 16:52:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:21.688 16:52:37 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:21.688 16:52:37 -- accel/accel.sh@40 -- # local IFS=,
00:05:21.688 16:52:37 -- accel/accel.sh@41 -- # jq -r .
00:05:21.688 [2024-04-18 16:52:37.206942] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
00:05:21.688 [2024-04-18 16:52:37.207012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583028 ]
00:05:21.688 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.688 [2024-04-18 16:52:37.271452] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.688 [2024-04-18 16:52:37.391804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=0x1
00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=:
00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val
00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=
00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in
00:05:21.946 16:52:37 --
accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=decompress 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val= 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=software 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@22 -- # accel_module=software 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=32 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=32 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- 
accel/accel.sh@20 -- # val=2 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val=Yes 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val= 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:21.946 16:52:37 -- accel/accel.sh@20 -- # val= 00:05:21.946 16:52:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # IFS=: 00:05:21.946 16:52:37 -- accel/accel.sh@19 -- # read -r var val 00:05:23.318 16:52:38 -- accel/accel.sh@20 -- # val= 00:05:23.318 16:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # IFS=: 00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # read -r var val 00:05:23.318 16:52:38 -- accel/accel.sh@20 -- # val= 00:05:23.318 16:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # IFS=: 00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # read -r var val 00:05:23.318 16:52:38 -- accel/accel.sh@20 -- # val= 00:05:23.318 16:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # IFS=: 00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # read -r var val 00:05:23.318 16:52:38 -- accel/accel.sh@20 -- # val= 00:05:23.318 16:52:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.318 16:52:38 
-- accel/accel.sh@19 -- # IFS=:
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # read -r var val
00:05:23.318 16:52:38 -- accel/accel.sh@20 -- # val=
00:05:23.318 16:52:38 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # IFS=:
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # read -r var val
00:05:23.318 16:52:38 -- accel/accel.sh@20 -- # val=
00:05:23.318 16:52:38 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # IFS=:
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # read -r var val
00:05:23.318 16:52:38 -- accel/accel.sh@20 -- # val=
00:05:23.318 16:52:38 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # IFS=:
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # read -r var val
00:05:23.318 16:52:38 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:23.318 16:52:38 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:23.318 16:52:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:23.318
00:05:23.318 real 0m1.494s
00:05:23.318 user 0m1.356s
00:05:23.318 sys 0m0.140s
00:05:23.318 16:52:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:23.318 16:52:38 -- common/autotest_common.sh@10 -- # set +x
00:05:23.318 ************************************
00:05:23.318 END TEST accel_decomp_mthread
00:05:23.318 ************************************
00:05:23.318 16:52:38 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:23.318 16:52:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:05:23.318 16:52:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:23.318 16:52:38 -- common/autotest_common.sh@10 -- # set +x
00:05:23.318 ************************************
00:05:23.318 START TEST accel_deomp_full_mthread
00:05:23.318 ************************************
00:05:23.318 16:52:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:23.318 16:52:38 -- accel/accel.sh@16 -- # local accel_opc
00:05:23.318 16:52:38 -- accel/accel.sh@17 -- # local accel_module
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # IFS=:
00:05:23.318 16:52:38 -- accel/accel.sh@19 -- # read -r var val
00:05:23.318 16:52:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:23.318 16:52:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:23.318 16:52:38 -- accel/accel.sh@12 -- # build_accel_config
00:05:23.318 16:52:38 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:23.318 16:52:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:23.318 16:52:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:23.318 16:52:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:23.318 16:52:38 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:23.318 16:52:38 -- accel/accel.sh@40 -- # local IFS=,
00:05:23.318 16:52:38 -- accel/accel.sh@41 -- # jq -r .
00:05:23.318 [2024-04-18 16:52:38.817560] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
00:05:23.318 [2024-04-18 16:52:38.817625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583197 ]
00:05:23.318 EAL: No free 2048 kB hugepages reported on node 1
00:05:23.318 [2024-04-18 16:52:38.878908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.318 [2024-04-18 16:52:38.999255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=
00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=:
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val
00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=
00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=:
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val
00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=
00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=:
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val
00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=0x1
00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=:
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val
00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=
00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=:
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val
00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=
00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=:
00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val
00:05:23.577 16:52:39 -- accel/accel.sh@20
-- # val=decompress 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val= 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=software 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@22 -- # accel_module=software 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=32 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=32 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=2 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # 
IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val=Yes 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val= 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:23.577 16:52:39 -- accel/accel.sh@20 -- # val= 00:05:23.577 16:52:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # IFS=: 00:05:23.577 16:52:39 -- accel/accel.sh@19 -- # read -r var val 00:05:24.950 16:52:40 -- accel/accel.sh@20 -- # val= 00:05:24.950 16:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # IFS=: 00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # read -r var val 00:05:24.950 16:52:40 -- accel/accel.sh@20 -- # val= 00:05:24.950 16:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # IFS=: 00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # read -r var val 00:05:24.950 16:52:40 -- accel/accel.sh@20 -- # val= 00:05:24.950 16:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # IFS=: 00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # read -r var val 00:05:24.950 16:52:40 -- accel/accel.sh@20 -- # val= 00:05:24.950 16:52:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # IFS=: 00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # read -r var val 00:05:24.950 16:52:40 -- accel/accel.sh@20 
-- # val=
00:05:24.950 16:52:40 -- accel/accel.sh@21 -- # case "$var" in
00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # IFS=:
00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # read -r var val
00:05:24.950 16:52:40 -- accel/accel.sh@20 -- # val=
00:05:24.950 16:52:40 -- accel/accel.sh@21 -- # case "$var" in
00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # IFS=:
00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # read -r var val
00:05:24.950 16:52:40 -- accel/accel.sh@20 -- # val=
00:05:24.950 16:52:40 -- accel/accel.sh@21 -- # case "$var" in
00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # IFS=:
00:05:24.950 16:52:40 -- accel/accel.sh@19 -- # read -r var val
00:05:24.950 16:52:40 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:24.950 16:52:40 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:24.950 16:52:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:24.950
00:05:24.950 real 0m1.508s
00:05:24.950 user 0m1.367s
00:05:24.950 sys 0m0.141s
00:05:24.950 16:52:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:24.950 16:52:40 -- common/autotest_common.sh@10 -- # set +x
00:05:24.950 ************************************
00:05:24.950 END TEST accel_deomp_full_mthread
00:05:24.950 ************************************
00:05:24.950 16:52:40 -- accel/accel.sh@124 -- # [[ n == y ]]
00:05:24.950 16:52:40 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:24.950 16:52:40 -- accel/accel.sh@137 -- # build_accel_config
00:05:24.950 16:52:40 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:24.950 16:52:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:05:24.950 16:52:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:24.950 16:52:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:24.950 16:52:40 -- common/autotest_common.sh@10 -- # set +x
00:05:24.950 16:52:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:24.950 16:52:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:24.950 16:52:40 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:24.950 16:52:40 -- accel/accel.sh@40 -- # local IFS=,
00:05:24.950 16:52:40 -- accel/accel.sh@41 -- # jq -r .
00:05:24.950 ************************************
00:05:24.950 START TEST accel_dif_functional_tests
00:05:24.950 ************************************
00:05:24.950 16:52:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:24.950 [2024-04-18 16:52:40.479042] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
00:05:24.950 [2024-04-18 16:52:40.479120] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583366 ]
00:05:25.208 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.208 [2024-04-18 16:52:40.550510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:25.208 [2024-04-18 16:52:40.672238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:25.208 [2024-04-18 16:52:40.672289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:25.208 [2024-04-18 16:52:40.672293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.208
00:05:25.208
00:05:25.208 CUnit - A unit testing framework for C - Version 2.1-3
00:05:25.208 http://cunit.sourceforge.net/
00:05:25.208
00:05:25.208
00:05:25.208
00:05:25.208 Suite: accel_dif
00:05:25.208 Test: verify: DIF generated, GUARD check ...passed
00:05:25.208 Test: verify: DIF generated, APPTAG check ...passed
00:05:25.208 Test: verify: DIF generated, REFTAG check ...passed
00:05:25.208 Test: verify: DIF not generated, GUARD check ...[2024-04-18 16:52:40.774369] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:25.208 [2024-04-18 16:52:40.774446] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:25.208 passed
00:05:25.208 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 16:52:40.774491] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:25.208 [2024-04-18 16:52:40.774521] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:25.208 passed
00:05:25.208 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 16:52:40.774557] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:25.208 [2024-04-18 16:52:40.774587] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:25.208 passed
00:05:25.208 Test: verify: APPTAG correct, APPTAG check ...passed
00:05:25.208 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-18 16:52:40.774660] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:05:25.208 passed
00:05:25.208 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:05:25.208 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:05:25.208 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:05:25.208 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 16:52:40.774827] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:05:25.208 passed
00:05:25.208 Test: generate copy: DIF generated, GUARD check ...passed
00:05:25.208 Test: generate copy: DIF generated, APTTAG check ...passed
00:05:25.208 Test: generate copy: DIF generated, REFTAG check ...passed
00:05:25.208 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:05:25.208 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:05:25.208 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:05:25.208 Test: generate copy:
iovecs-len validate ...[2024-04-18 16:52:40.775087] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:25.208 passed 00:05:25.208 Test: generate copy: buffer alignment validate ...passed 00:05:25.208 00:05:25.208 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.208 suites 1 1 n/a 0 0 00:05:25.208 tests 20 20 20 0 0 00:05:25.208 asserts 204 204 204 0 n/a 00:05:25.208 00:05:25.208 Elapsed time = 0.003 seconds 00:05:25.466 00:05:25.466 real 0m0.609s 00:05:25.466 user 0m0.869s 00:05:25.466 sys 0m0.189s 00:05:25.466 16:52:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.466 16:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:25.466 ************************************ 00:05:25.466 END TEST accel_dif_functional_tests 00:05:25.466 ************************************ 00:05:25.466 00:05:25.466 real 0m35.437s 00:05:25.466 user 0m37.660s 00:05:25.466 sys 0m5.515s 00:05:25.466 16:52:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.466 16:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:25.466 ************************************ 00:05:25.466 END TEST accel 00:05:25.466 ************************************ 00:05:25.466 16:52:41 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:25.466 16:52:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.466 16:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.466 16:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:25.724 ************************************ 00:05:25.724 START TEST accel_rpc 00:05:25.724 ************************************ 00:05:25.724 16:52:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:25.724 * Looking for test storage... 
00:05:25.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:25.724 16:52:41 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:25.724 16:52:41 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1583560 00:05:25.725 16:52:41 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:25.725 16:52:41 -- accel/accel_rpc.sh@15 -- # waitforlisten 1583560 00:05:25.725 16:52:41 -- common/autotest_common.sh@817 -- # '[' -z 1583560 ']' 00:05:25.725 16:52:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.725 16:52:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.725 16:52:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.725 16:52:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.725 16:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:25.725 [2024-04-18 16:52:41.308237] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:05:25.725 [2024-04-18 16:52:41.308320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583560 ] 00:05:25.725 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.725 [2024-04-18 16:52:41.370621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.983 [2024-04-18 16:52:41.480159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.914 16:52:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.914 16:52:42 -- common/autotest_common.sh@850 -- # return 0 00:05:26.914 16:52:42 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:26.914 16:52:42 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:26.914 16:52:42 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:26.914 16:52:42 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:26.914 16:52:42 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:26.914 16:52:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.914 16:52:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.914 16:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:26.914 ************************************ 00:05:26.914 START TEST accel_assign_opcode 00:05:26.914 ************************************ 00:05:26.914 16:52:42 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:05:26.914 16:52:42 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:26.914 16:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.914 16:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:26.914 [2024-04-18 16:52:42.374916] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:26.914 16:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.914 16:52:42 -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:26.914 16:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.914 16:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:26.914 [2024-04-18 16:52:42.382921] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:26.914 16:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.914 16:52:42 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:26.914 16:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.914 16:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:27.171 16:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.171 16:52:42 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:27.171 16:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.171 16:52:42 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:27.171 16:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:27.171 16:52:42 -- accel/accel_rpc.sh@42 -- # grep software 00:05:27.171 16:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.171 software 00:05:27.171 00:05:27.171 real 0m0.293s 00:05:27.171 user 0m0.039s 00:05:27.171 sys 0m0.005s 00:05:27.171 16:52:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.171 16:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:27.171 ************************************ 00:05:27.171 END TEST accel_assign_opcode 00:05:27.171 ************************************ 00:05:27.171 16:52:42 -- accel/accel_rpc.sh@55 -- # killprocess 1583560 00:05:27.171 16:52:42 -- common/autotest_common.sh@936 -- # '[' -z 1583560 ']' 00:05:27.171 16:52:42 -- common/autotest_common.sh@940 -- # kill -0 1583560 00:05:27.171 16:52:42 -- common/autotest_common.sh@941 -- # uname 00:05:27.171 16:52:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.171 16:52:42 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 1583560 00:05:27.171 16:52:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.171 16:52:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.171 16:52:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1583560' 00:05:27.171 killing process with pid 1583560 00:05:27.171 16:52:42 -- common/autotest_common.sh@955 -- # kill 1583560 00:05:27.171 16:52:42 -- common/autotest_common.sh@960 -- # wait 1583560 00:05:27.736 00:05:27.736 real 0m1.986s 00:05:27.736 user 0m2.169s 00:05:27.736 sys 0m0.488s 00:05:27.736 16:52:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.736 16:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:27.736 ************************************ 00:05:27.736 END TEST accel_rpc 00:05:27.736 ************************************ 00:05:27.736 16:52:43 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:27.736 16:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.736 16:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.736 16:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:27.736 ************************************ 00:05:27.736 START TEST app_cmdline 00:05:27.736 ************************************ 00:05:27.736 16:52:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:27.736 * Looking for test storage... 
00:05:27.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:27.736 16:52:43 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:27.736 16:52:43 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1583915 00:05:27.736 16:52:43 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:27.736 16:52:43 -- app/cmdline.sh@18 -- # waitforlisten 1583915 00:05:27.736 16:52:43 -- common/autotest_common.sh@817 -- # '[' -z 1583915 ']' 00:05:27.736 16:52:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.736 16:52:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.736 16:52:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.736 16:52:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.736 16:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:27.736 [2024-04-18 16:52:43.417027] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:05:27.736 [2024-04-18 16:52:43.417122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583915 ] 00:05:27.995 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.995 [2024-04-18 16:52:43.474251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.995 [2024-04-18 16:52:43.578730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.253 16:52:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.253 16:52:43 -- common/autotest_common.sh@850 -- # return 0 00:05:28.253 16:52:43 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:28.511 { 00:05:28.511 "version": "SPDK v24.05-pre git sha1 ce34c7fd8", 00:05:28.511 "fields": { 00:05:28.511 "major": 24, 00:05:28.511 "minor": 5, 00:05:28.511 "patch": 0, 00:05:28.511 "suffix": "-pre", 00:05:28.511 "commit": "ce34c7fd8" 00:05:28.511 } 00:05:28.511 } 00:05:28.511 16:52:44 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:28.511 16:52:44 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:28.511 16:52:44 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:28.511 16:52:44 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:28.511 16:52:44 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:28.511 16:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.511 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.511 16:52:44 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:28.511 16:52:44 -- app/cmdline.sh@26 -- # sort 00:05:28.511 16:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.511 16:52:44 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:28.511 16:52:44 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:28.511 16:52:44 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:28.511 16:52:44 -- common/autotest_common.sh@638 -- # local es=0 00:05:28.511 16:52:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:28.511 16:52:44 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:28.511 16:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.511 16:52:44 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:28.511 16:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.511 16:52:44 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:28.511 16:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.511 16:52:44 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:28.511 16:52:44 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:28.511 16:52:44 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:28.769 request: 00:05:28.769 { 00:05:28.769 "method": "env_dpdk_get_mem_stats", 00:05:28.769 "req_id": 1 00:05:28.769 } 00:05:28.769 Got JSON-RPC error response 00:05:28.769 response: 00:05:28.769 { 00:05:28.769 "code": -32601, 00:05:28.769 "message": "Method not found" 00:05:28.769 } 00:05:28.769 16:52:44 -- common/autotest_common.sh@641 -- # es=1 00:05:28.769 16:52:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:28.769 16:52:44 -- 
common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:28.769 16:52:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:28.769 16:52:44 -- app/cmdline.sh@1 -- # killprocess 1583915 00:05:28.769 16:52:44 -- common/autotest_common.sh@936 -- # '[' -z 1583915 ']' 00:05:28.769 16:52:44 -- common/autotest_common.sh@940 -- # kill -0 1583915 00:05:28.769 16:52:44 -- common/autotest_common.sh@941 -- # uname 00:05:28.769 16:52:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.769 16:52:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1583915 00:05:28.769 16:52:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.769 16:52:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.769 16:52:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1583915' 00:05:28.769 killing process with pid 1583915 00:05:28.769 16:52:44 -- common/autotest_common.sh@955 -- # kill 1583915 00:05:28.769 16:52:44 -- common/autotest_common.sh@960 -- # wait 1583915 00:05:29.336 00:05:29.336 real 0m1.626s 00:05:29.336 user 0m1.997s 00:05:29.336 sys 0m0.466s 00:05:29.336 16:52:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.336 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.336 ************************************ 00:05:29.336 END TEST app_cmdline 00:05:29.336 ************************************ 00:05:29.336 16:52:44 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:29.336 16:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.336 16:52:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.336 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.595 ************************************ 00:05:29.595 START TEST version 00:05:29.595 ************************************ 00:05:29.595 16:52:45 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:29.595 * Looking for test storage... 00:05:29.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:29.595 16:52:45 -- app/version.sh@17 -- # get_header_version major 00:05:29.595 16:52:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:29.595 16:52:45 -- app/version.sh@14 -- # cut -f2 00:05:29.595 16:52:45 -- app/version.sh@14 -- # tr -d '"' 00:05:29.595 16:52:45 -- app/version.sh@17 -- # major=24 00:05:29.595 16:52:45 -- app/version.sh@18 -- # get_header_version minor 00:05:29.595 16:52:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:29.595 16:52:45 -- app/version.sh@14 -- # cut -f2 00:05:29.595 16:52:45 -- app/version.sh@14 -- # tr -d '"' 00:05:29.595 16:52:45 -- app/version.sh@18 -- # minor=5 00:05:29.595 16:52:45 -- app/version.sh@19 -- # get_header_version patch 00:05:29.595 16:52:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:29.595 16:52:45 -- app/version.sh@14 -- # cut -f2 00:05:29.595 16:52:45 -- app/version.sh@14 -- # tr -d '"' 00:05:29.595 16:52:45 -- app/version.sh@19 -- # patch=0 00:05:29.595 16:52:45 -- app/version.sh@20 -- # get_header_version suffix 00:05:29.595 16:52:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:29.595 16:52:45 -- app/version.sh@14 -- # cut -f2 00:05:29.595 16:52:45 -- app/version.sh@14 -- # tr -d '"' 00:05:29.595 16:52:45 -- app/version.sh@20 -- # suffix=-pre 00:05:29.595 16:52:45 -- app/version.sh@22 -- # version=24.5 00:05:29.595 16:52:45 -- app/version.sh@25 -- # (( patch != 0 )) 
00:05:29.595 16:52:45 -- app/version.sh@28 -- # version=24.5rc0 00:05:29.595 16:52:45 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:29.595 16:52:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:29.595 16:52:45 -- app/version.sh@30 -- # py_version=24.5rc0 00:05:29.595 16:52:45 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:29.595 00:05:29.595 real 0m0.109s 00:05:29.595 user 0m0.057s 00:05:29.595 sys 0m0.075s 00:05:29.595 16:52:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.595 16:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.595 ************************************ 00:05:29.595 END TEST version 00:05:29.595 ************************************ 00:05:29.595 16:52:45 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:29.595 16:52:45 -- spdk/autotest.sh@194 -- # uname -s 00:05:29.595 16:52:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:29.595 16:52:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:29.595 16:52:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:29.595 16:52:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:29.595 16:52:45 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:05:29.595 16:52:45 -- spdk/autotest.sh@258 -- # timing_exit lib 00:05:29.595 16:52:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:29.595 16:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.595 16:52:45 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:29.595 16:52:45 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:05:29.595 16:52:45 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:05:29.595 16:52:45 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:05:29.595 16:52:45 
-- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:05:29.595 16:52:45 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:05:29.595 16:52:45 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:29.595 16:52:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:29.595 16:52:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.595 16:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.854 ************************************ 00:05:29.854 START TEST nvmf_tcp 00:05:29.854 ************************************ 00:05:29.854 16:52:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:29.854 * Looking for test storage... 00:05:29.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:29.854 16:52:45 -- nvmf/nvmf.sh@10 -- # uname -s 00:05:29.854 16:52:45 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:29.854 16:52:45 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.854 16:52:45 -- nvmf/common.sh@7 -- # uname -s 00:05:29.854 16:52:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.854 16:52:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.854 16:52:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.854 16:52:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.854 16:52:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.854 16:52:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.854 16:52:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.854 16:52:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.854 16:52:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.854 16:52:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.854 16:52:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:29.854 16:52:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:29.854 16:52:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.854 16:52:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.854 16:52:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:29.854 16:52:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.854 16:52:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.854 16:52:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.854 16:52:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.854 16:52:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.854 16:52:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.854 16:52:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.854 16:52:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.854 16:52:45 -- paths/export.sh@5 -- # export PATH 00:05:29.854 16:52:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.854 16:52:45 -- nvmf/common.sh@47 -- # : 0 00:05:29.854 16:52:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:29.854 16:52:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:29.854 
16:52:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.854 16:52:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.854 16:52:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.854 16:52:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:29.854 16:52:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:29.854 16:52:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:29.854 16:52:45 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:29.854 16:52:45 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:29.854 16:52:45 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:29.854 16:52:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:29.854 16:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.854 16:52:45 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:29.854 16:52:45 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:29.854 16:52:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:29.854 16:52:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.854 16:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.854 ************************************ 00:05:29.854 START TEST nvmf_example 00:05:29.854 ************************************ 00:05:29.854 16:52:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:29.854 * Looking for test storage... 
00:05:29.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:29.854 16:52:45 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.854 16:52:45 -- nvmf/common.sh@7 -- # uname -s 00:05:29.854 16:52:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.854 16:52:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.854 16:52:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.854 16:52:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.854 16:52:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.854 16:52:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.854 16:52:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.854 16:52:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.854 16:52:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.854 16:52:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.854 16:52:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:29.854 16:52:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:29.854 16:52:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.854 16:52:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.854 16:52:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:29.854 16:52:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.854 16:52:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.854 16:52:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.854 16:52:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.854 16:52:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.854 16:52:45 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.854 16:52:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.854 16:52:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.854 16:52:45 -- paths/export.sh@5 -- # export PATH 00:05:29.854 16:52:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.854 16:52:45 -- nvmf/common.sh@47 -- # : 0 00:05:29.854 16:52:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:29.854 16:52:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:29.855 16:52:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.855 16:52:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.855 16:52:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.855 16:52:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:29.855 16:52:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:29.855 16:52:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:29.855 16:52:45 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:29.855 16:52:45 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:29.855 16:52:45 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:29.855 16:52:45 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:29.855 16:52:45 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:29.855 16:52:45 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:29.855 16:52:45 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:29.855 16:52:45 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:29.855 16:52:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:29.855 16:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:30.114 16:52:45 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:05:30.114 16:52:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:30.114 16:52:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:30.114 16:52:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:30.114 16:52:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:30.114 16:52:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:30.114 16:52:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:30.114 16:52:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:30.114 16:52:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:30.114 16:52:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:30.114 16:52:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:30.114 16:52:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:30.114 16:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.017 16:52:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:32.017 16:52:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:32.017 16:52:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:32.017 16:52:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:32.017 16:52:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:32.017 16:52:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:32.017 16:52:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:32.017 16:52:47 -- nvmf/common.sh@295 -- # net_devs=() 00:05:32.017 16:52:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:32.017 16:52:47 -- nvmf/common.sh@296 -- # e810=() 00:05:32.017 16:52:47 -- nvmf/common.sh@296 -- # local -ga e810 00:05:32.017 16:52:47 -- nvmf/common.sh@297 -- # x722=() 00:05:32.017 16:52:47 -- nvmf/common.sh@297 -- # local -ga x722 00:05:32.017 16:52:47 -- nvmf/common.sh@298 -- # mlx=() 00:05:32.017 16:52:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:32.017 16:52:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:05:32.017 16:52:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:32.017 16:52:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:32.017 16:52:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:32.017 16:52:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:32.017 16:52:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:32.017 16:52:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:32.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:32.017 16:52:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:05:32.017 16:52:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:32.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:32.017 16:52:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:32.017 16:52:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:32.017 16:52:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.017 16:52:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:32.017 16:52:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.017 16:52:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:32.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:32.017 16:52:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.017 16:52:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:32.017 16:52:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.017 16:52:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:32.017 16:52:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.017 16:52:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:32.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:32.017 16:52:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.017 16:52:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:32.017 16:52:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:32.017 16:52:47 -- 
nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:32.017 16:52:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:32.017 16:52:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:32.017 16:52:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:32.017 16:52:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:32.017 16:52:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:32.017 16:52:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:32.017 16:52:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:32.017 16:52:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:32.017 16:52:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:32.017 16:52:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:32.017 16:52:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:32.017 16:52:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:32.017 16:52:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:32.017 16:52:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:32.017 16:52:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:32.017 16:52:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:32.017 16:52:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:32.017 16:52:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:32.017 16:52:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:32.017 16:52:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:32.275 16:52:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:32.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:32.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:05:32.275 00:05:32.275 --- 10.0.0.2 ping statistics --- 00:05:32.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.275 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:05:32.275 16:52:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:32.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:32.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:05:32.275 00:05:32.275 --- 10.0.0.1 ping statistics --- 00:05:32.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.276 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:05:32.276 16:52:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:32.276 16:52:47 -- nvmf/common.sh@411 -- # return 0 00:05:32.276 16:52:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:32.276 16:52:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:32.276 16:52:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:05:32.276 16:52:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:05:32.276 16:52:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:32.276 16:52:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:05:32.276 16:52:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:05:32.276 16:52:47 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:32.276 16:52:47 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:32.276 16:52:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:32.276 16:52:47 -- common/autotest_common.sh@10 -- # set +x 00:05:32.276 16:52:47 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:32.276 16:52:47 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:32.276 16:52:47 -- target/nvmf_example.sh@34 -- # nvmfpid=1585959 00:05:32.276 16:52:47 -- target/nvmf_example.sh@33 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:32.276 16:52:47 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:32.276 16:52:47 -- target/nvmf_example.sh@36 -- # waitforlisten 1585959 00:05:32.276 16:52:47 -- common/autotest_common.sh@817 -- # '[' -z 1585959 ']' 00:05:32.276 16:52:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.276 16:52:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.276 16:52:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.276 16:52:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.276 16:52:47 -- common/autotest_common.sh@10 -- # set +x 00:05:32.276 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.207 16:52:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:33.207 16:52:48 -- common/autotest_common.sh@850 -- # return 0 00:05:33.207 16:52:48 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:33.207 16:52:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:33.207 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.207 16:52:48 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:33.207 16:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.207 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.207 16:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.207 16:52:48 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:33.207 16:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.207 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.207 16:52:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.207 16:52:48 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:33.207 16:52:48 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.207 16:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.207 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.207 16:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.207 16:52:48 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:33.207 16:52:48 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:33.207 16:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.207 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.207 16:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.207 16:52:48 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:33.207 16:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.207 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.207 16:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.207 16:52:48 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:33.207 16:52:48 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:33.207 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.465 Initializing NVMe Controllers 00:05:45.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:45.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:45.465 Initialization complete. 
Launching workers. 00:05:45.465 ======================================================== 00:05:45.465 Latency(us) 00:05:45.465 Device Information : IOPS MiB/s Average min max 00:05:45.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15109.60 59.02 4236.13 689.89 20019.30 00:05:45.465 ======================================================== 00:05:45.465 Total : 15109.60 59.02 4236.13 689.89 20019.30 00:05:45.465 00:05:45.465 16:52:59 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:45.465 16:52:59 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:45.465 16:52:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:05:45.465 16:52:59 -- nvmf/common.sh@117 -- # sync 00:05:45.465 16:52:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:45.465 16:52:59 -- nvmf/common.sh@120 -- # set +e 00:05:45.465 16:52:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:45.465 16:52:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:45.465 rmmod nvme_tcp 00:05:45.466 rmmod nvme_fabrics 00:05:45.466 rmmod nvme_keyring 00:05:45.466 16:52:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:45.466 16:52:59 -- nvmf/common.sh@124 -- # set -e 00:05:45.466 16:52:59 -- nvmf/common.sh@125 -- # return 0 00:05:45.466 16:52:59 -- nvmf/common.sh@478 -- # '[' -n 1585959 ']' 00:05:45.466 16:52:59 -- nvmf/common.sh@479 -- # killprocess 1585959 00:05:45.466 16:52:59 -- common/autotest_common.sh@936 -- # '[' -z 1585959 ']' 00:05:45.466 16:52:59 -- common/autotest_common.sh@940 -- # kill -0 1585959 00:05:45.466 16:52:59 -- common/autotest_common.sh@941 -- # uname 00:05:45.466 16:52:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.466 16:52:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1585959 00:05:45.466 16:52:59 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:05:45.466 16:52:59 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:05:45.466 16:52:59 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1585959' 00:05:45.466 killing process with pid 1585959 00:05:45.466 16:52:59 -- common/autotest_common.sh@955 -- # kill 1585959 00:05:45.466 16:52:59 -- common/autotest_common.sh@960 -- # wait 1585959 00:05:45.466 nvmf threads initialize successfully 00:05:45.466 bdev subsystem init successfully 00:05:45.466 created a nvmf target service 00:05:45.466 create targets's poll groups done 00:05:45.466 all subsystems of target started 00:05:45.466 nvmf target is running 00:05:45.466 all subsystems of target stopped 00:05:45.466 destroy targets's poll groups done 00:05:45.466 destroyed the nvmf target service 00:05:45.466 bdev subsystem finish successfully 00:05:45.466 nvmf threads destroy successfully 00:05:45.466 16:52:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:45.466 16:52:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:05:45.466 16:52:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:05:45.466 16:52:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:45.466 16:52:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:45.466 16:52:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.466 16:52:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:45.466 16:52:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.723 16:53:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:45.723 16:53:01 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:45.723 16:53:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:45.723 16:53:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.723 00:05:45.723 real 0m15.914s 00:05:45.723 user 0m44.959s 00:05:45.723 sys 0m3.330s 00:05:45.723 16:53:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.723 16:53:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.723 ************************************ 00:05:45.723 END TEST 
nvmf_example 00:05:45.723 ************************************ 00:05:45.723 16:53:01 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:45.723 16:53:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:45.723 16:53:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.723 16:53:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.984 ************************************ 00:05:45.984 START TEST nvmf_filesystem 00:05:45.984 ************************************ 00:05:45.984 16:53:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:45.984 * Looking for test storage... 00:05:45.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.984 16:53:01 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:45.984 16:53:01 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:45.984 16:53:01 -- common/autotest_common.sh@34 -- # set -e 00:05:45.984 16:53:01 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:45.984 16:53:01 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:45.984 16:53:01 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:45.984 16:53:01 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:45.984 16:53:01 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:45.984 16:53:01 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:45.984 16:53:01 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:45.984 16:53:01 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:45.984 16:53:01 -- 
common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:45.984 16:53:01 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:45.984 16:53:01 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:45.984 16:53:01 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:45.984 16:53:01 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:45.984 16:53:01 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:45.984 16:53:01 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:45.984 16:53:01 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:45.984 16:53:01 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:45.984 16:53:01 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:45.984 16:53:01 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:45.984 16:53:01 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:45.984 16:53:01 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:45.984 16:53:01 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:45.984 16:53:01 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:45.984 16:53:01 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:45.984 16:53:01 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:45.984 16:53:01 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:45.984 16:53:01 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:45.984 16:53:01 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:45.984 16:53:01 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:45.984 16:53:01 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:45.984 16:53:01 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:45.984 16:53:01 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:45.984 16:53:01 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:45.984 16:53:01 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 
00:05:45.984 16:53:01 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:45.984 16:53:01 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:45.984 16:53:01 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:45.984 16:53:01 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:45.984 16:53:01 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:45.984 16:53:01 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:45.984 16:53:01 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:45.984 16:53:01 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:45.984 16:53:01 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:45.984 16:53:01 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:45.984 16:53:01 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:45.984 16:53:01 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:45.984 16:53:01 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:45.984 16:53:01 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:45.984 16:53:01 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:45.984 16:53:01 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:45.984 16:53:01 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:05:45.984 16:53:01 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:05:45.984 16:53:01 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:45.984 16:53:01 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:05:45.984 16:53:01 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:05:45.984 16:53:01 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:05:45.984 16:53:01 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:05:45.984 16:53:01 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:05:45.984 16:53:01 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:05:45.984 16:53:01 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 
00:05:45.984 16:53:01 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:05:45.984 16:53:01 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:05:45.984 16:53:01 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:05:45.984 16:53:01 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:05:45.984 16:53:01 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:05:45.984 16:53:01 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:05:45.984 16:53:01 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:05:45.984 16:53:01 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:05:45.984 16:53:01 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:05:45.984 16:53:01 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:05:45.984 16:53:01 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:05:45.984 16:53:01 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:05:45.984 16:53:01 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:45.984 16:53:01 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:05:45.984 16:53:01 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:05:45.984 16:53:01 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:05:45.984 16:53:01 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:05:45.984 16:53:01 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:05:45.984 16:53:01 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:05:45.984 16:53:01 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:05:45.984 16:53:01 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:05:45.984 16:53:01 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:05:45.984 16:53:01 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:05:45.984 16:53:01 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:05:45.984 16:53:01 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:45.984 16:53:01 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:05:45.984 16:53:01 -- common/build_config.sh@82 -- # 
CONFIG_URING=n 00:05:45.984 16:53:01 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:45.984 16:53:01 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:45.984 16:53:01 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:45.984 16:53:01 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:45.984 16:53:01 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.984 16:53:01 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:45.984 16:53:01 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:45.984 16:53:01 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:45.984 16:53:01 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:45.984 16:53:01 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:45.984 16:53:01 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:45.984 16:53:01 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:45.984 16:53:01 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:45.984 16:53:01 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:45.984 16:53:01 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:45.984 16:53:01 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:45.984 #define SPDK_CONFIG_H 00:05:45.984 #define SPDK_CONFIG_APPS 1 00:05:45.984 #define SPDK_CONFIG_ARCH native 00:05:45.984 #undef SPDK_CONFIG_ASAN 00:05:45.984 
#undef SPDK_CONFIG_AVAHI 00:05:45.984 #undef SPDK_CONFIG_CET 00:05:45.984 #define SPDK_CONFIG_COVERAGE 1 00:05:45.984 #define SPDK_CONFIG_CROSS_PREFIX 00:05:45.984 #undef SPDK_CONFIG_CRYPTO 00:05:45.984 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:45.984 #undef SPDK_CONFIG_CUSTOMOCF 00:05:45.984 #undef SPDK_CONFIG_DAOS 00:05:45.984 #define SPDK_CONFIG_DAOS_DIR 00:05:45.984 #define SPDK_CONFIG_DEBUG 1 00:05:45.984 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:45.984 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:45.984 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:45.984 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:45.984 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:45.984 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:45.984 #define SPDK_CONFIG_EXAMPLES 1 00:05:45.984 #undef SPDK_CONFIG_FC 00:05:45.984 #define SPDK_CONFIG_FC_PATH 00:05:45.984 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:45.984 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:45.984 #undef SPDK_CONFIG_FUSE 00:05:45.984 #undef SPDK_CONFIG_FUZZER 00:05:45.984 #define SPDK_CONFIG_FUZZER_LIB 00:05:45.984 #undef SPDK_CONFIG_GOLANG 00:05:45.984 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:45.984 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:45.984 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:45.984 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:45.984 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:45.984 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:45.984 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:45.984 #define SPDK_CONFIG_IDXD 1 00:05:45.984 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:45.984 #undef SPDK_CONFIG_IPSEC_MB 00:05:45.984 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:45.984 #define SPDK_CONFIG_ISAL 1 00:05:45.984 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:45.984 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:45.984 #define SPDK_CONFIG_LIBDIR 00:05:45.984 #undef SPDK_CONFIG_LTO 00:05:45.985 #define SPDK_CONFIG_MAX_LCORES 00:05:45.985 #define SPDK_CONFIG_NVME_CUSE 1 
00:05:45.985 #undef SPDK_CONFIG_OCF 00:05:45.985 #define SPDK_CONFIG_OCF_PATH 00:05:45.985 #define SPDK_CONFIG_OPENSSL_PATH 00:05:45.985 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:45.985 #define SPDK_CONFIG_PGO_DIR 00:05:45.985 #undef SPDK_CONFIG_PGO_USE 00:05:45.985 #define SPDK_CONFIG_PREFIX /usr/local 00:05:45.985 #undef SPDK_CONFIG_RAID5F 00:05:45.985 #undef SPDK_CONFIG_RBD 00:05:45.985 #define SPDK_CONFIG_RDMA 1 00:05:45.985 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:45.985 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:45.985 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:45.985 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:45.985 #define SPDK_CONFIG_SHARED 1 00:05:45.985 #undef SPDK_CONFIG_SMA 00:05:45.985 #define SPDK_CONFIG_TESTS 1 00:05:45.985 #undef SPDK_CONFIG_TSAN 00:05:45.985 #define SPDK_CONFIG_UBLK 1 00:05:45.985 #define SPDK_CONFIG_UBSAN 1 00:05:45.985 #undef SPDK_CONFIG_UNIT_TESTS 00:05:45.985 #undef SPDK_CONFIG_URING 00:05:45.985 #define SPDK_CONFIG_URING_PATH 00:05:45.985 #undef SPDK_CONFIG_URING_ZNS 00:05:45.985 #undef SPDK_CONFIG_USDT 00:05:45.985 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:45.985 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:45.985 #define SPDK_CONFIG_VFIO_USER 1 00:05:45.985 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:45.985 #define SPDK_CONFIG_VHOST 1 00:05:45.985 #define SPDK_CONFIG_VIRTIO 1 00:05:45.985 #undef SPDK_CONFIG_VTUNE 00:05:45.985 #define SPDK_CONFIG_VTUNE_DIR 00:05:45.985 #define SPDK_CONFIG_WERROR 1 00:05:45.985 #define SPDK_CONFIG_WPDK_DIR 00:05:45.985 #undef SPDK_CONFIG_XNVME 00:05:45.985 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:45.985 16:53:01 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:45.985 16:53:01 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.985 16:53:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.985 16:53:01 -- scripts/common.sh@510 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.985 16:53:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.985 16:53:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.985 16:53:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.985 16:53:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.985 16:53:01 -- paths/export.sh@5 -- # export PATH 00:05:45.985 16:53:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.985 16:53:01 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:45.985 16:53:01 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:45.985 16:53:01 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:45.985 16:53:01 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:45.985 16:53:01 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:45.985 16:53:01 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.985 16:53:01 -- pm/common@67 -- # TEST_TAG=N/A 00:05:45.985 16:53:01 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:45.985 16:53:01 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:45.985 16:53:01 -- pm/common@71 -- # uname -s 00:05:45.985 16:53:01 -- pm/common@71 -- # PM_OS=Linux 00:05:45.985 16:53:01 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:45.985 16:53:01 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:05:45.985 16:53:01 -- pm/common@76 -- # [[ Linux == Linux ]] 00:05:45.985 16:53:01 -- pm/common@76 -- # [[ ............................... 
!= QEMU ]] 00:05:45.985 16:53:01 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:05:45.985 16:53:01 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:45.985 16:53:01 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:45.985 16:53:01 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:05:45.985 16:53:01 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:05:45.985 16:53:01 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:45.985 16:53:01 -- common/autotest_common.sh@57 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:45.985 16:53:01 -- common/autotest_common.sh@61 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:45.985 16:53:01 -- common/autotest_common.sh@63 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:45.985 16:53:01 -- common/autotest_common.sh@65 -- # : 1 00:05:45.985 16:53:01 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:45.985 16:53:01 -- common/autotest_common.sh@67 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:45.985 16:53:01 -- common/autotest_common.sh@69 -- # : 00:05:45.985 16:53:01 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:45.985 16:53:01 -- common/autotest_common.sh@71 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:45.985 16:53:01 -- common/autotest_common.sh@73 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:45.985 16:53:01 -- common/autotest_common.sh@75 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:45.985 16:53:01 -- common/autotest_common.sh@77 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 
00:05:45.985 16:53:01 -- common/autotest_common.sh@79 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:45.985 16:53:01 -- common/autotest_common.sh@81 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:45.985 16:53:01 -- common/autotest_common.sh@83 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:45.985 16:53:01 -- common/autotest_common.sh@85 -- # : 1 00:05:45.985 16:53:01 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:05:45.985 16:53:01 -- common/autotest_common.sh@87 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:45.985 16:53:01 -- common/autotest_common.sh@89 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:45.985 16:53:01 -- common/autotest_common.sh@91 -- # : 1 00:05:45.985 16:53:01 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:45.985 16:53:01 -- common/autotest_common.sh@93 -- # : 1 00:05:45.985 16:53:01 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:45.985 16:53:01 -- common/autotest_common.sh@95 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:45.985 16:53:01 -- common/autotest_common.sh@97 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:45.985 16:53:01 -- common/autotest_common.sh@99 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:45.985 16:53:01 -- common/autotest_common.sh@101 -- # : tcp 00:05:45.985 16:53:01 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:45.985 16:53:01 -- common/autotest_common.sh@103 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:45.985 16:53:01 -- common/autotest_common.sh@105 -- # : 0 
00:05:45.985 16:53:01 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:45.985 16:53:01 -- common/autotest_common.sh@107 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:45.985 16:53:01 -- common/autotest_common.sh@109 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:45.985 16:53:01 -- common/autotest_common.sh@111 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:45.985 16:53:01 -- common/autotest_common.sh@113 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:45.985 16:53:01 -- common/autotest_common.sh@115 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:45.985 16:53:01 -- common/autotest_common.sh@117 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:45.985 16:53:01 -- common/autotest_common.sh@119 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:45.985 16:53:01 -- common/autotest_common.sh@121 -- # : 1 00:05:45.985 16:53:01 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:45.985 16:53:01 -- common/autotest_common.sh@123 -- # : 00:05:45.985 16:53:01 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:45.985 16:53:01 -- common/autotest_common.sh@125 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:45.985 16:53:01 -- common/autotest_common.sh@127 -- # : 0 00:05:45.985 16:53:01 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:45.985 16:53:01 -- common/autotest_common.sh@129 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:45.986 16:53:01 -- common/autotest_common.sh@131 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@132 -- # export 
SPDK_TEST_OCF 00:05:45.986 16:53:01 -- common/autotest_common.sh@133 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:45.986 16:53:01 -- common/autotest_common.sh@135 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:45.986 16:53:01 -- common/autotest_common.sh@137 -- # : 00:05:45.986 16:53:01 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:45.986 16:53:01 -- common/autotest_common.sh@139 -- # : true 00:05:45.986 16:53:01 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:45.986 16:53:01 -- common/autotest_common.sh@141 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:45.986 16:53:01 -- common/autotest_common.sh@143 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:45.986 16:53:01 -- common/autotest_common.sh@145 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:45.986 16:53:01 -- common/autotest_common.sh@147 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:45.986 16:53:01 -- common/autotest_common.sh@149 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:45.986 16:53:01 -- common/autotest_common.sh@151 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:45.986 16:53:01 -- common/autotest_common.sh@153 -- # : e810 00:05:45.986 16:53:01 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:45.986 16:53:01 -- common/autotest_common.sh@155 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:45.986 16:53:01 -- common/autotest_common.sh@157 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:45.986 16:53:01 -- 
common/autotest_common.sh@159 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:45.986 16:53:01 -- common/autotest_common.sh@161 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:45.986 16:53:01 -- common/autotest_common.sh@163 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:45.986 16:53:01 -- common/autotest_common.sh@166 -- # : 00:05:45.986 16:53:01 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:45.986 16:53:01 -- common/autotest_common.sh@168 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:45.986 16:53:01 -- common/autotest_common.sh@170 -- # : 0 00:05:45.986 16:53:01 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:45.986 16:53:01 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:45.986 16:53:01 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:45.986 16:53:01 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:45.986 16:53:01 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:45.986 16:53:01 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:45.986 16:53:01 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:45.986 16:53:01 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:45.986 16:53:01 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:45.986 16:53:01 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:45.986 16:53:01 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:45.986 16:53:01 -- common/autotest_common.sh@184 -- 
# export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:45.986 16:53:01 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:45.986 16:53:01 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:45.986 16:53:01 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:45.986 16:53:01 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:45.986 16:53:01 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:45.986 16:53:01 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:45.986 16:53:01 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:45.986 16:53:01 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:45.986 16:53:01 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:45.986 
16:53:01 -- common/autotest_common.sh@199 -- # cat 00:05:45.986 16:53:01 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:05:45.986 16:53:01 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:45.986 16:53:01 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:45.986 16:53:01 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:45.986 16:53:01 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:45.986 16:53:01 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:05:45.986 16:53:01 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:05:45.986 16:53:01 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:45.986 16:53:01 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:45.986 16:53:01 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:45.986 16:53:01 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:45.986 16:53:01 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:45.986 16:53:01 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:45.986 16:53:01 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:45.986 16:53:01 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:45.986 16:53:01 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:45.986 
16:53:01 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:45.986 16:53:01 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:45.986 16:53:01 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:45.986 16:53:01 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:05:45.986 16:53:01 -- common/autotest_common.sh@252 -- # export valgrind= 00:05:45.986 16:53:01 -- common/autotest_common.sh@252 -- # valgrind= 00:05:45.986 16:53:01 -- common/autotest_common.sh@258 -- # uname -s 00:05:45.986 16:53:01 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:05:45.986 16:53:01 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:05:45.986 16:53:01 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:05:45.986 16:53:01 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:05:45.986 16:53:01 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:45.986 16:53:01 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:45.986 16:53:01 -- common/autotest_common.sh@268 -- # MAKE=make 00:05:45.986 16:53:01 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:05:45.986 16:53:01 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:05:45.986 16:53:01 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:05:45.986 16:53:01 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:05:45.986 16:53:01 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:05:45.986 16:53:01 -- common/autotest_common.sh@289 -- # for i in "$@" 00:05:45.986 16:53:01 -- common/autotest_common.sh@290 -- # case "$i" in 00:05:45.986 16:53:01 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:05:45.986 16:53:01 -- common/autotest_common.sh@307 -- # [[ -z 1587678 ]] 00:05:45.986 16:53:01 -- common/autotest_common.sh@307 -- # kill -0 1587678 00:05:45.986 16:53:01 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:05:45.986 16:53:01 -- 
common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:05:45.986 16:53:01 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:05:45.986 16:53:01 -- common/autotest_common.sh@320 -- # local mount target_dir 00:05:45.986 16:53:01 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:05:45.986 16:53:01 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:05:45.986 16:53:01 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:05:45.986 16:53:01 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:05:45.986 16:53:01 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.rWcBuO 00:05:45.986 16:53:01 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:45.987 16:53:01 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:05:45.987 16:53:01 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:05:45.987 16:53:01 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rWcBuO/tests/target /tmp/spdk.rWcBuO 00:05:45.987 16:53:01 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:05:45.987 16:53:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:45.987 16:53:01 -- common/autotest_common.sh@316 -- # df -T 00:05:45.987 16:53:01 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:05:45.987 16:53:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:05:45.987 16:53:01 -- common/autotest_common.sh@349 -- # 
read -r source fs size use avail _ mount 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=996749312 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:05:45.987 16:53:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287680512 00:05:45.987 16:53:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=50131755008 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994717184 00:05:45.987 16:53:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=11862962176 00:05:45.987 16:53:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=30943817728 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544 00:05:45.987 16:53:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=53538816 00:05:45.987 16:53:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390178816 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256 00:05:45.987 16:53:01 -- 
common/autotest_common.sh@352 -- # uses["$mount"]=8765440 00:05:45.987 16:53:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996918272 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997360640 00:05:45.987 16:53:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=442368 00:05:45.987 16:53:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936 00:05:45.987 16:53:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032 00:05:45.987 16:53:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:05:45.987 16:53:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:45.987 16:53:01 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:05:45.987 * Looking for test storage... 
00:05:45.987 16:53:01 -- common/autotest_common.sh@357 -- # local target_space new_size 00:05:45.987 16:53:01 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:05:45.987 16:53:01 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.987 16:53:01 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:45.987 16:53:01 -- common/autotest_common.sh@361 -- # mount=/ 00:05:45.987 16:53:01 -- common/autotest_common.sh@363 -- # target_space=50131755008 00:05:45.987 16:53:01 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:05:45.987 16:53:01 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:05:45.987 16:53:01 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:05:45.987 16:53:01 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:05:45.987 16:53:01 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:05:45.987 16:53:01 -- common/autotest_common.sh@370 -- # new_size=14077554688 00:05:45.987 16:53:01 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:45.987 16:53:01 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.987 16:53:01 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.987 16:53:01 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.987 16:53:01 -- common/autotest_common.sh@378 -- # return 0 00:05:45.987 16:53:01 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:05:45.987 16:53:01 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 
00:05:45.987 16:53:01 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:45.987 16:53:01 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:45.987 16:53:01 -- common/autotest_common.sh@1673 -- # true 00:05:45.987 16:53:01 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:05:45.987 16:53:01 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:45.987 16:53:01 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:45.987 16:53:01 -- common/autotest_common.sh@27 -- # exec 00:05:45.987 16:53:01 -- common/autotest_common.sh@29 -- # exec 00:05:45.987 16:53:01 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:45.987 16:53:01 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:45.987 16:53:01 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:45.987 16:53:01 -- common/autotest_common.sh@18 -- # set -x 00:05:45.987 16:53:01 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.987 16:53:01 -- nvmf/common.sh@7 -- # uname -s 00:05:45.987 16:53:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.987 16:53:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.987 16:53:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.987 16:53:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.987 16:53:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.987 16:53:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.987 16:53:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.987 16:53:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.987 16:53:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.987 16:53:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.987 16:53:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:05:45.987 16:53:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.987 16:53:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.987 16:53:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.987 16:53:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.987 16:53:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.987 16:53:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.987 16:53:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.987 16:53:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.987 16:53:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.987 16:53:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.987 16:53:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.987 16:53:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.987 16:53:01 -- paths/export.sh@5 -- # export PATH 00:05:45.987 16:53:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.987 16:53:01 -- nvmf/common.sh@47 
-- # : 0 00:05:45.987 16:53:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.987 16:53:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.987 16:53:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.987 16:53:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.987 16:53:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.987 16:53:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.987 16:53:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.987 16:53:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.988 16:53:01 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:45.988 16:53:01 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:45.988 16:53:01 -- target/filesystem.sh@15 -- # nvmftestinit 00:05:45.988 16:53:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:45.988 16:53:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.988 16:53:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:45.988 16:53:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:45.988 16:53:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:45.988 16:53:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.988 16:53:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:45.988 16:53:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.988 16:53:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:45.988 16:53:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:45.988 16:53:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:45.988 16:53:01 -- common/autotest_common.sh@10 -- # set +x 00:05:48.526 16:53:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:48.526 16:53:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:48.526 16:53:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:48.526 16:53:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:48.526 16:53:03 -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:05:48.526 16:53:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:48.526 16:53:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:48.527 16:53:03 -- nvmf/common.sh@295 -- # net_devs=() 00:05:48.527 16:53:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:48.527 16:53:03 -- nvmf/common.sh@296 -- # e810=() 00:05:48.527 16:53:03 -- nvmf/common.sh@296 -- # local -ga e810 00:05:48.527 16:53:03 -- nvmf/common.sh@297 -- # x722=() 00:05:48.527 16:53:03 -- nvmf/common.sh@297 -- # local -ga x722 00:05:48.527 16:53:03 -- nvmf/common.sh@298 -- # mlx=() 00:05:48.527 16:53:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:48.527 16:53:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:48.527 16:53:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:48.527 16:53:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:05:48.527 16:53:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:48.527 16:53:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:48.527 16:53:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:48.527 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:48.527 16:53:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:48.527 16:53:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:48.527 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:48.527 16:53:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:48.527 16:53:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:48.527 16:53:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:48.527 16:53:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:48.527 16:53:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:48.527 16:53:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:48.527 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:48.527 16:53:03 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:05:48.527 16:53:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:48.527 16:53:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:48.527 16:53:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:48.527 16:53:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:48.527 16:53:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:48.527 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:48.527 16:53:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:48.527 16:53:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:48.527 16:53:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:48.527 16:53:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:48.527 16:53:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:48.527 16:53:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:48.527 16:53:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:48.527 16:53:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:48.527 16:53:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:48.527 16:53:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:48.527 16:53:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:48.527 16:53:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:48.527 16:53:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:48.527 16:53:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:48.527 16:53:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:48.527 16:53:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:48.527 16:53:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:48.527 16:53:03 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:48.527 16:53:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:48.527 16:53:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:48.527 16:53:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:48.527 16:53:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:48.527 16:53:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:48.527 16:53:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:48.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:48.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:05:48.527 00:05:48.527 --- 10.0.0.2 ping statistics --- 00:05:48.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:48.527 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:05:48.527 16:53:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:48.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:48.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:05:48.527 00:05:48.527 --- 10.0.0.1 ping statistics --- 00:05:48.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:48.527 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:05:48.527 16:53:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:48.527 16:53:03 -- nvmf/common.sh@411 -- # return 0 00:05:48.527 16:53:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:48.527 16:53:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:48.527 16:53:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:05:48.527 16:53:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:48.527 16:53:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:05:48.527 16:53:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:05:48.527 16:53:03 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:48.527 16:53:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:48.527 16:53:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.527 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.527 ************************************ 00:05:48.527 START TEST nvmf_filesystem_no_in_capsule 00:05:48.527 ************************************ 00:05:48.527 16:53:03 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:05:48.527 16:53:03 -- target/filesystem.sh@47 -- # in_capsule=0 00:05:48.527 16:53:03 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:48.527 16:53:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:05:48.527 16:53:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:48.527 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.527 16:53:03 -- nvmf/common.sh@470 -- # nvmfpid=1589314 00:05:48.527 16:53:03 -- nvmf/common.sh@469 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:48.527 16:53:03 -- nvmf/common.sh@471 -- # waitforlisten 1589314 00:05:48.527 16:53:03 -- common/autotest_common.sh@817 -- # '[' -z 1589314 ']' 00:05:48.527 16:53:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.527 16:53:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.527 16:53:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.527 16:53:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.527 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.527 [2024-04-18 16:53:04.038029] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:48.527 [2024-04-18 16:53:04.038110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:48.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.527 [2024-04-18 16:53:04.102366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.527 [2024-04-18 16:53:04.213883] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:48.527 [2024-04-18 16:53:04.213945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:48.527 [2024-04-18 16:53:04.213969] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.527 [2024-04-18 16:53:04.213988] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:05:48.527 [2024-04-18 16:53:04.214004] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:48.527 [2024-04-18 16:53:04.214064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.527 [2024-04-18 16:53:04.214127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.527 [2024-04-18 16:53:04.214191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.527 [2024-04-18 16:53:04.214198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.791 16:53:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.791 16:53:04 -- common/autotest_common.sh@850 -- # return 0 00:05:48.791 16:53:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:05:48.791 16:53:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:48.791 16:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:48.791 16:53:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:48.791 16:53:04 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:48.791 16:53:04 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:48.791 16:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:48.791 16:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:48.791 [2024-04-18 16:53:04.373169] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.791 16:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:48.791 16:53:04 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:48.791 16:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:48.791 16:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.049 Malloc1 00:05:49.049 16:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:49.049 16:53:04 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
-a -s SPDKISFASTANDAWESOME 00:05:49.049 16:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:49.049 16:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.049 16:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:49.049 16:53:04 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:49.049 16:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:49.049 16:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.049 16:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:49.049 16:53:04 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:49.049 16:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:49.050 16:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.050 [2024-04-18 16:53:04.557745] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:49.050 16:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:49.050 16:53:04 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:49.050 16:53:04 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:05:49.050 16:53:04 -- common/autotest_common.sh@1365 -- # local bdev_info 00:05:49.050 16:53:04 -- common/autotest_common.sh@1366 -- # local bs 00:05:49.050 16:53:04 -- common/autotest_common.sh@1367 -- # local nb 00:05:49.050 16:53:04 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:49.050 16:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:49.050 16:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.050 16:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:49.050 16:53:04 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:05:49.050 { 00:05:49.050 "name": "Malloc1", 00:05:49.050 "aliases": [ 00:05:49.050 "b9e9ae17-6ef8-4a51-ab1b-d30adf125328" 00:05:49.050 ], 00:05:49.050 "product_name": 
"Malloc disk", 00:05:49.050 "block_size": 512, 00:05:49.050 "num_blocks": 1048576, 00:05:49.050 "uuid": "b9e9ae17-6ef8-4a51-ab1b-d30adf125328", 00:05:49.050 "assigned_rate_limits": { 00:05:49.050 "rw_ios_per_sec": 0, 00:05:49.050 "rw_mbytes_per_sec": 0, 00:05:49.050 "r_mbytes_per_sec": 0, 00:05:49.050 "w_mbytes_per_sec": 0 00:05:49.050 }, 00:05:49.050 "claimed": true, 00:05:49.050 "claim_type": "exclusive_write", 00:05:49.050 "zoned": false, 00:05:49.050 "supported_io_types": { 00:05:49.050 "read": true, 00:05:49.050 "write": true, 00:05:49.050 "unmap": true, 00:05:49.050 "write_zeroes": true, 00:05:49.050 "flush": true, 00:05:49.050 "reset": true, 00:05:49.050 "compare": false, 00:05:49.050 "compare_and_write": false, 00:05:49.050 "abort": true, 00:05:49.050 "nvme_admin": false, 00:05:49.050 "nvme_io": false 00:05:49.050 }, 00:05:49.050 "memory_domains": [ 00:05:49.050 { 00:05:49.050 "dma_device_id": "system", 00:05:49.050 "dma_device_type": 1 00:05:49.050 }, 00:05:49.050 { 00:05:49.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.050 "dma_device_type": 2 00:05:49.050 } 00:05:49.050 ], 00:05:49.050 "driver_specific": {} 00:05:49.050 } 00:05:49.050 ]' 00:05:49.050 16:53:04 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:05:49.050 16:53:04 -- common/autotest_common.sh@1369 -- # bs=512 00:05:49.050 16:53:04 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:05:49.050 16:53:04 -- common/autotest_common.sh@1370 -- # nb=1048576 00:05:49.050 16:53:04 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:05:49.050 16:53:04 -- common/autotest_common.sh@1374 -- # echo 512 00:05:49.050 16:53:04 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:49.050 16:53:04 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:49.615 16:53:05 -- target/filesystem.sh@62 
-- # waitforserial SPDKISFASTANDAWESOME 00:05:49.615 16:53:05 -- common/autotest_common.sh@1184 -- # local i=0 00:05:49.615 16:53:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:05:49.615 16:53:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:05:49.615 16:53:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:05:52.138 16:53:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:05:52.138 16:53:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:05:52.138 16:53:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:05:52.138 16:53:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:05:52.138 16:53:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:05:52.138 16:53:07 -- common/autotest_common.sh@1194 -- # return 0 00:05:52.138 16:53:07 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:52.138 16:53:07 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:52.138 16:53:07 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:52.138 16:53:07 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:52.138 16:53:07 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:52.138 16:53:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:52.138 16:53:07 -- setup/common.sh@80 -- # echo 536870912 00:05:52.138 16:53:07 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:52.138 16:53:07 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:52.138 16:53:07 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:52.138 16:53:07 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:52.138 16:53:07 -- target/filesystem.sh@69 -- # partprobe 00:05:53.070 16:53:08 -- target/filesystem.sh@70 -- # sleep 1 00:05:54.003 16:53:09 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:54.003 16:53:09 -- target/filesystem.sh@77 -- # 
run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:54.003 16:53:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:54.003 16:53:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.003 16:53:09 -- common/autotest_common.sh@10 -- # set +x 00:05:54.003 ************************************ 00:05:54.003 START TEST filesystem_ext4 00:05:54.003 ************************************ 00:05:54.003 16:53:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:54.003 16:53:09 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:54.003 16:53:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:54.003 16:53:09 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:54.003 16:53:09 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:05:54.003 16:53:09 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:54.003 16:53:09 -- common/autotest_common.sh@914 -- # local i=0 00:05:54.003 16:53:09 -- common/autotest_common.sh@915 -- # local force 00:05:54.003 16:53:09 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:05:54.003 16:53:09 -- common/autotest_common.sh@918 -- # force=-F 00:05:54.003 16:53:09 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:54.003 mke2fs 1.46.5 (30-Dec-2021) 00:05:54.003 Discarding device blocks: 0/522240 done 00:05:54.003 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:54.003 Filesystem UUID: 12867fd5-d970-4288-8f76-d33ffca72e43 00:05:54.003 Superblock backups stored on blocks: 00:05:54.003 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:54.003 00:05:54.003 Allocating group tables: 0/64 done 00:05:54.003 Writing inode tables: 0/64 done 00:05:54.259 Creating journal (8192 blocks): done 00:05:54.259 Writing superblocks and filesystem accounting information: 0/64 done 00:05:54.259 00:05:54.259 16:53:09 -- common/autotest_common.sh@931 -- # return 0 00:05:54.259 16:53:09 -- 
target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:54.260 16:53:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:54.517 16:53:09 -- target/filesystem.sh@25 -- # sync 00:05:54.517 16:53:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:54.517 16:53:10 -- target/filesystem.sh@27 -- # sync 00:05:54.517 16:53:10 -- target/filesystem.sh@29 -- # i=0 00:05:54.517 16:53:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:54.517 16:53:10 -- target/filesystem.sh@37 -- # kill -0 1589314 00:05:54.517 16:53:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:54.517 16:53:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:54.517 16:53:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:54.517 16:53:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:54.517 00:05:54.517 real 0m0.482s 00:05:54.517 user 0m0.010s 00:05:54.517 sys 0m0.035s 00:05:54.517 16:53:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.517 16:53:10 -- common/autotest_common.sh@10 -- # set +x 00:05:54.517 ************************************ 00:05:54.517 END TEST filesystem_ext4 00:05:54.517 ************************************ 00:05:54.517 16:53:10 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:54.517 16:53:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:54.517 16:53:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.517 16:53:10 -- common/autotest_common.sh@10 -- # set +x 00:05:54.517 ************************************ 00:05:54.517 START TEST filesystem_btrfs 00:05:54.517 ************************************ 00:05:54.517 16:53:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:54.517 16:53:10 -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:54.517 16:53:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:54.517 16:53:10 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 
00:05:54.517 16:53:10 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:05:54.517 16:53:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:54.517 16:53:10 -- common/autotest_common.sh@914 -- # local i=0 00:05:54.517 16:53:10 -- common/autotest_common.sh@915 -- # local force 00:05:54.517 16:53:10 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:05:54.517 16:53:10 -- common/autotest_common.sh@920 -- # force=-f 00:05:54.517 16:53:10 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:55.082 btrfs-progs v6.6.2 00:05:55.082 See https://btrfs.readthedocs.io for more information. 00:05:55.082 00:05:55.082 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:05:55.082 NOTE: several default settings have changed in version 5.15, please make sure 00:05:55.082 this does not affect your deployments: 00:05:55.082 - DUP for metadata (-m dup) 00:05:55.082 - enabled no-holes (-O no-holes) 00:05:55.082 - enabled free-space-tree (-R free-space-tree) 00:05:55.082 00:05:55.082 Label: (null) 00:05:55.082 UUID: e94ed680-82ca-4cbb-a09d-9268ba5339ec 00:05:55.082 Node size: 16384 00:05:55.082 Sector size: 4096 00:05:55.082 Filesystem size: 510.00MiB 00:05:55.082 Block group profiles: 00:05:55.082 Data: single 8.00MiB 00:05:55.082 Metadata: DUP 32.00MiB 00:05:55.082 System: DUP 8.00MiB 00:05:55.082 SSD detected: yes 00:05:55.082 Zoned device: no 00:05:55.082 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:55.082 Runtime features: free-space-tree 00:05:55.082 Checksum: crc32c 00:05:55.082 Number of devices: 1 00:05:55.082 Devices: 00:05:55.082 ID SIZE PATH 00:05:55.082 1 510.00MiB /dev/nvme0n1p1 00:05:55.082 00:05:55.082 16:53:10 -- common/autotest_common.sh@931 -- # return 0 00:05:55.082 16:53:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:55.082 16:53:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:55.082 16:53:10 -- target/filesystem.sh@25 -- 
# sync 00:05:55.082 16:53:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:55.082 16:53:10 -- target/filesystem.sh@27 -- # sync 00:05:55.082 16:53:10 -- target/filesystem.sh@29 -- # i=0 00:05:55.082 16:53:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:55.082 16:53:10 -- target/filesystem.sh@37 -- # kill -0 1589314 00:05:55.082 16:53:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:55.082 16:53:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:55.082 16:53:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:55.082 16:53:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:55.082 00:05:55.082 real 0m0.570s 00:05:55.082 user 0m0.015s 00:05:55.082 sys 0m0.055s 00:05:55.082 16:53:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.082 16:53:10 -- common/autotest_common.sh@10 -- # set +x 00:05:55.082 ************************************ 00:05:55.082 END TEST filesystem_btrfs 00:05:55.082 ************************************ 00:05:55.082 16:53:10 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:55.082 16:53:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:55.082 16:53:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.082 16:53:10 -- common/autotest_common.sh@10 -- # set +x 00:05:55.340 ************************************ 00:05:55.340 START TEST filesystem_xfs 00:05:55.340 ************************************ 00:05:55.340 16:53:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:05:55.340 16:53:10 -- target/filesystem.sh@18 -- # fstype=xfs 00:05:55.340 16:53:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:55.340 16:53:10 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:55.340 16:53:10 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:05:55.340 16:53:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:55.340 16:53:10 -- 
common/autotest_common.sh@914 -- # local i=0 00:05:55.340 16:53:10 -- common/autotest_common.sh@915 -- # local force 00:05:55.340 16:53:10 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:05:55.340 16:53:10 -- common/autotest_common.sh@920 -- # force=-f 00:05:55.340 16:53:10 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:55.340 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:05:55.340 = sectsz=512 attr=2, projid32bit=1 00:05:55.340 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:55.340 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:55.340 data = bsize=4096 blocks=130560, imaxpct=25 00:05:55.340 = sunit=0 swidth=0 blks 00:05:55.340 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:55.340 log =internal log bsize=4096 blocks=16384, version=2 00:05:55.340 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:55.340 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:56.272 Discarding blocks...Done. 00:05:56.272 16:53:11 -- common/autotest_common.sh@931 -- # return 0 00:05:56.272 16:53:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:58.165 16:53:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:58.422 16:53:13 -- target/filesystem.sh@25 -- # sync 00:05:58.422 16:53:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:58.422 16:53:13 -- target/filesystem.sh@27 -- # sync 00:05:58.422 16:53:13 -- target/filesystem.sh@29 -- # i=0 00:05:58.422 16:53:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:58.422 16:53:13 -- target/filesystem.sh@37 -- # kill -0 1589314 00:05:58.422 16:53:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:58.422 16:53:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:58.422 16:53:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:58.422 16:53:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:58.422 00:05:58.422 real 0m3.094s 00:05:58.422 user 0m0.012s 00:05:58.422 sys 0m0.042s 00:05:58.422 16:53:13 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.422 16:53:13 -- common/autotest_common.sh@10 -- # set +x 00:05:58.422 ************************************ 00:05:58.422 END TEST filesystem_xfs 00:05:58.422 ************************************ 00:05:58.422 16:53:13 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:58.681 16:53:14 -- target/filesystem.sh@93 -- # sync 00:05:58.681 16:53:14 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:58.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:58.681 16:53:14 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:58.681 16:53:14 -- common/autotest_common.sh@1205 -- # local i=0 00:05:58.681 16:53:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:05:58.681 16:53:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:58.681 16:53:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:05:58.681 16:53:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:58.681 16:53:14 -- common/autotest_common.sh@1217 -- # return 0 00:05:58.681 16:53:14 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:58.681 16:53:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.681 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:05:58.681 16:53:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.681 16:53:14 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:58.681 16:53:14 -- target/filesystem.sh@101 -- # killprocess 1589314 00:05:58.681 16:53:14 -- common/autotest_common.sh@936 -- # '[' -z 1589314 ']' 00:05:58.681 16:53:14 -- common/autotest_common.sh@940 -- # kill -0 1589314 00:05:58.681 16:53:14 -- common/autotest_common.sh@941 -- # uname 00:05:58.681 16:53:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.681 16:53:14 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1589314 00:05:58.681 16:53:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.681 16:53:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.681 16:53:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1589314' 00:05:58.681 killing process with pid 1589314 00:05:58.681 16:53:14 -- common/autotest_common.sh@955 -- # kill 1589314 00:05:58.681 16:53:14 -- common/autotest_common.sh@960 -- # wait 1589314 00:05:59.248 16:53:14 -- target/filesystem.sh@102 -- # nvmfpid= 00:05:59.248 00:05:59.248 real 0m10.798s 00:05:59.248 user 0m41.209s 00:05:59.248 sys 0m1.733s 00:05:59.248 16:53:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.248 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:05:59.248 ************************************ 00:05:59.248 END TEST nvmf_filesystem_no_in_capsule 00:05:59.248 ************************************ 00:05:59.248 16:53:14 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:05:59.248 16:53:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:59.248 16:53:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.248 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:05:59.248 ************************************ 00:05:59.248 START TEST nvmf_filesystem_in_capsule 00:05:59.248 ************************************ 00:05:59.248 16:53:14 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:05:59.248 16:53:14 -- target/filesystem.sh@47 -- # in_capsule=4096 00:05:59.248 16:53:14 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:59.248 16:53:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:05:59.248 16:53:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:59.248 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:05:59.248 16:53:14 -- nvmf/common.sh@470 -- # nvmfpid=1590887 00:05:59.248 
16:53:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:59.248 16:53:14 -- nvmf/common.sh@471 -- # waitforlisten 1590887 00:05:59.248 16:53:14 -- common/autotest_common.sh@817 -- # '[' -z 1590887 ']' 00:05:59.248 16:53:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.248 16:53:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.248 16:53:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.248 16:53:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.248 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:05:59.506 [2024-04-18 16:53:14.971350] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:05:59.506 [2024-04-18 16:53:14.971440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:59.506 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.506 [2024-04-18 16:53:15.039579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.506 [2024-04-18 16:53:15.158171] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:59.507 [2024-04-18 16:53:15.158250] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:59.507 [2024-04-18 16:53:15.158276] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:59.507 [2024-04-18 16:53:15.158298] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:05:59.507 [2024-04-18 16:53:15.158316] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:59.507 [2024-04-18 16:53:15.158415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.507 [2024-04-18 16:53:15.158462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.507 [2024-04-18 16:53:15.158496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.507 [2024-04-18 16:53:15.158503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.438 16:53:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.439 16:53:15 -- common/autotest_common.sh@850 -- # return 0 00:06:00.439 16:53:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:00.439 16:53:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:00.439 16:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.439 16:53:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.439 16:53:15 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:00.439 16:53:15 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:00.439 16:53:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.439 16:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.439 [2024-04-18 16:53:15.911340] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.439 16:53:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.439 16:53:15 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:00.439 16:53:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.439 16:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.439 Malloc1 00:06:00.439 16:53:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.439 16:53:16 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:00.439 16:53:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.439 16:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.439 16:53:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.439 16:53:16 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:00.439 16:53:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.439 16:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.439 16:53:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.439 16:53:16 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:00.439 16:53:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.439 16:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.439 [2024-04-18 16:53:16.102903] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:00.439 16:53:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.439 16:53:16 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:00.439 16:53:16 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:00.439 16:53:16 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:00.439 16:53:16 -- common/autotest_common.sh@1366 -- # local bs 00:06:00.439 16:53:16 -- common/autotest_common.sh@1367 -- # local nb 00:06:00.439 16:53:16 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:00.439 16:53:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.439 16:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.439 16:53:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.439 16:53:16 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:00.439 { 00:06:00.439 "name": "Malloc1", 00:06:00.439 "aliases": [ 00:06:00.439 "94676409-e55c-4c4d-8872-699cea884206" 00:06:00.439 ], 
00:06:00.439 "product_name": "Malloc disk", 00:06:00.439 "block_size": 512, 00:06:00.439 "num_blocks": 1048576, 00:06:00.439 "uuid": "94676409-e55c-4c4d-8872-699cea884206", 00:06:00.439 "assigned_rate_limits": { 00:06:00.439 "rw_ios_per_sec": 0, 00:06:00.439 "rw_mbytes_per_sec": 0, 00:06:00.439 "r_mbytes_per_sec": 0, 00:06:00.439 "w_mbytes_per_sec": 0 00:06:00.439 }, 00:06:00.439 "claimed": true, 00:06:00.439 "claim_type": "exclusive_write", 00:06:00.439 "zoned": false, 00:06:00.439 "supported_io_types": { 00:06:00.439 "read": true, 00:06:00.439 "write": true, 00:06:00.439 "unmap": true, 00:06:00.439 "write_zeroes": true, 00:06:00.439 "flush": true, 00:06:00.439 "reset": true, 00:06:00.439 "compare": false, 00:06:00.439 "compare_and_write": false, 00:06:00.439 "abort": true, 00:06:00.439 "nvme_admin": false, 00:06:00.439 "nvme_io": false 00:06:00.439 }, 00:06:00.439 "memory_domains": [ 00:06:00.439 { 00:06:00.439 "dma_device_id": "system", 00:06:00.439 "dma_device_type": 1 00:06:00.439 }, 00:06:00.439 { 00:06:00.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.439 "dma_device_type": 2 00:06:00.439 } 00:06:00.439 ], 00:06:00.439 "driver_specific": {} 00:06:00.439 } 00:06:00.439 ]' 00:06:00.439 16:53:16 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:00.696 16:53:16 -- common/autotest_common.sh@1369 -- # bs=512 00:06:00.696 16:53:16 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:00.696 16:53:16 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:00.696 16:53:16 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:00.696 16:53:16 -- common/autotest_common.sh@1374 -- # echo 512 00:06:00.696 16:53:16 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:00.696 16:53:16 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:01.260 
16:53:16 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:01.260 16:53:16 -- common/autotest_common.sh@1184 -- # local i=0 00:06:01.260 16:53:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:01.260 16:53:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:01.260 16:53:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:03.156 16:53:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:03.156 16:53:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:03.156 16:53:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:03.156 16:53:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:03.156 16:53:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:03.156 16:53:18 -- common/autotest_common.sh@1194 -- # return 0 00:06:03.156 16:53:18 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:03.156 16:53:18 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:03.156 16:53:18 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:03.156 16:53:18 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:03.156 16:53:18 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:03.156 16:53:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:03.156 16:53:18 -- setup/common.sh@80 -- # echo 536870912 00:06:03.156 16:53:18 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:03.156 16:53:18 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:03.156 16:53:18 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:03.156 16:53:18 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:03.413 16:53:18 -- target/filesystem.sh@69 -- # partprobe 00:06:04.345 16:53:19 -- target/filesystem.sh@70 -- # sleep 1 00:06:05.314 16:53:20 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:05.314 
16:53:20 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:05.314 16:53:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:05.314 16:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.314 16:53:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.314 ************************************ 00:06:05.314 START TEST filesystem_in_capsule_ext4 00:06:05.314 ************************************ 00:06:05.314 16:53:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:05.314 16:53:20 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:05.314 16:53:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:05.314 16:53:20 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:05.314 16:53:20 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:05.314 16:53:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:05.314 16:53:20 -- common/autotest_common.sh@914 -- # local i=0 00:06:05.314 16:53:20 -- common/autotest_common.sh@915 -- # local force 00:06:05.314 16:53:20 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:05.314 16:53:20 -- common/autotest_common.sh@918 -- # force=-F 00:06:05.314 16:53:20 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:05.314 mke2fs 1.46.5 (30-Dec-2021) 00:06:05.314 Discarding device blocks: 0/522240 done 00:06:05.572 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:05.572 Filesystem UUID: 5e0a794f-db59-48dd-a006-216dcd33e05c 00:06:05.572 Superblock backups stored on blocks: 00:06:05.572 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:05.572 00:06:05.572 Allocating group tables: 0/64 done 00:06:05.572 Writing inode tables: 0/64 done 00:06:06.943 Creating journal (8192 blocks): done 00:06:06.943 Writing superblocks and filesystem accounting information: 0/64 done 00:06:06.943 00:06:06.943 16:53:22 -- 
common/autotest_common.sh@931 -- # return 0 00:06:06.943 16:53:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:07.507 16:53:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:07.507 16:53:23 -- target/filesystem.sh@25 -- # sync 00:06:07.507 16:53:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:07.507 16:53:23 -- target/filesystem.sh@27 -- # sync 00:06:07.507 16:53:23 -- target/filesystem.sh@29 -- # i=0 00:06:07.507 16:53:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:07.507 16:53:23 -- target/filesystem.sh@37 -- # kill -0 1590887 00:06:07.507 16:53:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:07.507 16:53:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:07.767 16:53:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:07.767 16:53:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:07.767 00:06:07.767 real 0m2.316s 00:06:07.767 user 0m0.014s 00:06:07.767 sys 0m0.042s 00:06:07.767 16:53:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.767 16:53:23 -- common/autotest_common.sh@10 -- # set +x 00:06:07.767 ************************************ 00:06:07.767 END TEST filesystem_in_capsule_ext4 00:06:07.767 ************************************ 00:06:07.767 16:53:23 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:07.767 16:53:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:07.767 16:53:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.767 16:53:23 -- common/autotest_common.sh@10 -- # set +x 00:06:07.767 ************************************ 00:06:07.767 START TEST filesystem_in_capsule_btrfs 00:06:07.767 ************************************ 00:06:07.767 16:53:23 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:07.767 16:53:23 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:07.767 16:53:23 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:06:07.767 16:53:23 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:07.767 16:53:23 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:07.767 16:53:23 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:07.767 16:53:23 -- common/autotest_common.sh@914 -- # local i=0 00:06:07.767 16:53:23 -- common/autotest_common.sh@915 -- # local force 00:06:07.767 16:53:23 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:07.767 16:53:23 -- common/autotest_common.sh@920 -- # force=-f 00:06:07.767 16:53:23 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:08.024 btrfs-progs v6.6.2 00:06:08.024 See https://btrfs.readthedocs.io for more information. 00:06:08.024 00:06:08.024 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:08.024 NOTE: several default settings have changed in version 5.15, please make sure 00:06:08.024 this does not affect your deployments: 00:06:08.024 - DUP for metadata (-m dup) 00:06:08.024 - enabled no-holes (-O no-holes) 00:06:08.024 - enabled free-space-tree (-R free-space-tree) 00:06:08.024 00:06:08.024 Label: (null) 00:06:08.024 UUID: 020c9614-606c-49b1-8184-0726a4ebf074 00:06:08.024 Node size: 16384 00:06:08.024 Sector size: 4096 00:06:08.024 Filesystem size: 510.00MiB 00:06:08.024 Block group profiles: 00:06:08.024 Data: single 8.00MiB 00:06:08.024 Metadata: DUP 32.00MiB 00:06:08.024 System: DUP 8.00MiB 00:06:08.024 SSD detected: yes 00:06:08.024 Zoned device: no 00:06:08.024 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:08.024 Runtime features: free-space-tree 00:06:08.024 Checksum: crc32c 00:06:08.024 Number of devices: 1 00:06:08.024 Devices: 00:06:08.024 ID SIZE PATH 00:06:08.024 1 510.00MiB /dev/nvme0n1p1 00:06:08.024 00:06:08.024 16:53:23 -- common/autotest_common.sh@931 -- # return 0 00:06:08.024 16:53:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:08.281 
16:53:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:08.281 16:53:23 -- target/filesystem.sh@25 -- # sync 00:06:08.281 16:53:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:08.281 16:53:23 -- target/filesystem.sh@27 -- # sync 00:06:08.281 16:53:23 -- target/filesystem.sh@29 -- # i=0 00:06:08.281 16:53:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:08.281 16:53:23 -- target/filesystem.sh@37 -- # kill -0 1590887 00:06:08.281 16:53:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:08.281 16:53:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:08.281 16:53:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:08.281 16:53:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:08.281 00:06:08.281 real 0m0.441s 00:06:08.281 user 0m0.015s 00:06:08.281 sys 0m0.044s 00:06:08.281 16:53:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.281 16:53:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.282 ************************************ 00:06:08.282 END TEST filesystem_in_capsule_btrfs 00:06:08.282 ************************************ 00:06:08.282 16:53:23 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:08.282 16:53:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:08.282 16:53:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.282 16:53:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.282 ************************************ 00:06:08.282 START TEST filesystem_in_capsule_xfs 00:06:08.282 ************************************ 00:06:08.282 16:53:23 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:08.282 16:53:23 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:08.282 16:53:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:08.282 16:53:23 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:08.282 16:53:23 -- common/autotest_common.sh@912 
-- # local fstype=xfs 00:06:08.282 16:53:23 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:08.282 16:53:23 -- common/autotest_common.sh@914 -- # local i=0 00:06:08.282 16:53:23 -- common/autotest_common.sh@915 -- # local force 00:06:08.282 16:53:23 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:08.282 16:53:23 -- common/autotest_common.sh@920 -- # force=-f 00:06:08.282 16:53:23 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:08.539 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:08.539 = sectsz=512 attr=2, projid32bit=1 00:06:08.539 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:08.539 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:08.539 data = bsize=4096 blocks=130560, imaxpct=25 00:06:08.539 = sunit=0 swidth=0 blks 00:06:08.539 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:08.539 log =internal log bsize=4096 blocks=16384, version=2 00:06:08.539 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:08.539 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:09.470 Discarding blocks...Done. 
00:06:09.470 16:53:24 -- common/autotest_common.sh@931 -- # return 0 00:06:09.470 16:53:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:11.994 16:53:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:11.994 16:53:27 -- target/filesystem.sh@25 -- # sync 00:06:11.994 16:53:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:11.994 16:53:27 -- target/filesystem.sh@27 -- # sync 00:06:11.994 16:53:27 -- target/filesystem.sh@29 -- # i=0 00:06:11.994 16:53:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:11.994 16:53:27 -- target/filesystem.sh@37 -- # kill -0 1590887 00:06:11.994 16:53:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:11.994 16:53:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:11.994 16:53:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:11.994 16:53:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:11.994 00:06:11.994 real 0m3.740s 00:06:11.994 user 0m0.019s 00:06:11.994 sys 0m0.036s 00:06:11.994 16:53:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.994 16:53:27 -- common/autotest_common.sh@10 -- # set +x 00:06:11.994 ************************************ 00:06:11.994 END TEST filesystem_in_capsule_xfs 00:06:11.994 ************************************ 00:06:11.994 16:53:27 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:12.252 16:53:27 -- target/filesystem.sh@93 -- # sync 00:06:12.252 16:53:27 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:12.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:12.252 16:53:27 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:12.252 16:53:27 -- common/autotest_common.sh@1205 -- # local i=0 00:06:12.252 16:53:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:12.252 16:53:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:12.252 16:53:27 
-- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:12.252 16:53:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:12.252 16:53:27 -- common/autotest_common.sh@1217 -- # return 0 00:06:12.252 16:53:27 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:12.252 16:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.252 16:53:27 -- common/autotest_common.sh@10 -- # set +x 00:06:12.252 16:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.252 16:53:27 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:12.252 16:53:27 -- target/filesystem.sh@101 -- # killprocess 1590887 00:06:12.252 16:53:27 -- common/autotest_common.sh@936 -- # '[' -z 1590887 ']' 00:06:12.252 16:53:27 -- common/autotest_common.sh@940 -- # kill -0 1590887 00:06:12.252 16:53:27 -- common/autotest_common.sh@941 -- # uname 00:06:12.252 16:53:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.252 16:53:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1590887 00:06:12.252 16:53:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.252 16:53:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.252 16:53:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1590887' 00:06:12.252 killing process with pid 1590887 00:06:12.252 16:53:27 -- common/autotest_common.sh@955 -- # kill 1590887 00:06:12.252 16:53:27 -- common/autotest_common.sh@960 -- # wait 1590887 00:06:12.819 16:53:28 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:12.819 00:06:12.819 real 0m13.402s 00:06:12.819 user 0m51.552s 00:06:12.820 sys 0m1.942s 00:06:12.820 16:53:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.820 16:53:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.820 ************************************ 00:06:12.820 END TEST nvmf_filesystem_in_capsule 00:06:12.820 
************************************ 00:06:12.820 16:53:28 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:12.820 16:53:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:12.820 16:53:28 -- nvmf/common.sh@117 -- # sync 00:06:12.820 16:53:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:12.820 16:53:28 -- nvmf/common.sh@120 -- # set +e 00:06:12.820 16:53:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:12.820 16:53:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:12.820 rmmod nvme_tcp 00:06:12.820 rmmod nvme_fabrics 00:06:12.820 rmmod nvme_keyring 00:06:12.820 16:53:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:12.820 16:53:28 -- nvmf/common.sh@124 -- # set -e 00:06:12.820 16:53:28 -- nvmf/common.sh@125 -- # return 0 00:06:12.820 16:53:28 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:06:12.820 16:53:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:12.820 16:53:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:12.820 16:53:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:12.820 16:53:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:12.820 16:53:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:12.820 16:53:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.820 16:53:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:12.820 16:53:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.357 16:53:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:15.357 00:06:15.357 real 0m28.917s 00:06:15.357 user 1m33.787s 00:06:15.357 sys 0m5.350s 00:06:15.357 16:53:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.357 16:53:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.357 ************************************ 00:06:15.357 END TEST nvmf_filesystem 00:06:15.357 ************************************ 00:06:15.357 16:53:30 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:15.357 16:53:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:15.357 16:53:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.357 16:53:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.357 ************************************ 00:06:15.357 START TEST nvmf_discovery 00:06:15.357 ************************************ 00:06:15.357 16:53:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:15.357 * Looking for test storage... 00:06:15.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.357 16:53:30 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.357 16:53:30 -- nvmf/common.sh@7 -- # uname -s 00:06:15.357 16:53:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.357 16:53:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.357 16:53:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.357 16:53:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.357 16:53:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.357 16:53:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.357 16:53:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.357 16:53:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.357 16:53:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.357 16:53:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.357 16:53:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.357 16:53:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.357 16:53:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:15.357 16:53:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.357 16:53:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.357 16:53:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.357 16:53:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.357 16:53:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.357 16:53:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.357 16:53:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.357 16:53:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.357 16:53:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.357 16:53:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.357 16:53:30 -- paths/export.sh@5 -- # export PATH 00:06:15.357 16:53:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.357 16:53:30 -- nvmf/common.sh@47 -- # : 0 00:06:15.357 16:53:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.357 16:53:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.357 16:53:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.357 16:53:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.357 16:53:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.357 16:53:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.357 16:53:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.357 16:53:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.357 16:53:30 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:15.357 16:53:30 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:15.357 16:53:30 -- 
target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:15.357 16:53:30 -- target/discovery.sh@15 -- # hash nvme 00:06:15.357 16:53:30 -- target/discovery.sh@20 -- # nvmftestinit 00:06:15.357 16:53:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:15.357 16:53:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.357 16:53:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:15.357 16:53:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:15.357 16:53:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:15.357 16:53:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.357 16:53:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:15.357 16:53:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.357 16:53:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:15.357 16:53:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:15.357 16:53:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.357 16:53:30 -- common/autotest_common.sh@10 -- # set +x 00:06:17.260 16:53:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:17.260 16:53:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:17.260 16:53:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:17.260 16:53:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:17.260 16:53:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:17.260 16:53:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:17.260 16:53:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:17.260 16:53:32 -- nvmf/common.sh@295 -- # net_devs=() 00:06:17.260 16:53:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:17.260 16:53:32 -- nvmf/common.sh@296 -- # e810=() 00:06:17.260 16:53:32 -- nvmf/common.sh@296 -- # local -ga e810 00:06:17.260 16:53:32 -- nvmf/common.sh@297 -- # x722=() 00:06:17.260 16:53:32 -- nvmf/common.sh@297 -- # local -ga x722 00:06:17.260 16:53:32 -- nvmf/common.sh@298 -- # mlx=() 00:06:17.260 16:53:32 
-- nvmf/common.sh@298 -- # local -ga mlx 00:06:17.260 16:53:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.260 16:53:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:17.260 16:53:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:17.260 16:53:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:17.260 16:53:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.260 16:53:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:17.260 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:17.260 16:53:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.260 
16:53:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.260 16:53:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:17.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:17.260 16:53:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:17.260 16:53:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.260 16:53:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.260 16:53:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:17.260 16:53:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.260 16:53:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:17.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:17.260 16:53:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.260 16:53:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.260 16:53:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.260 16:53:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:17.260 16:53:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.260 16:53:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:17.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:17.260 16:53:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.260 16:53:32 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:17.260 16:53:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:17.260 16:53:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:17.260 16:53:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:17.260 16:53:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.260 16:53:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.260 16:53:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.260 16:53:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:17.260 16:53:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.260 16:53:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.260 16:53:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:17.260 16:53:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.260 16:53:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.260 16:53:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:17.260 16:53:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:17.260 16:53:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.260 16:53:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.260 16:53:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.260 16:53:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.260 16:53:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:17.260 16:53:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.260 16:53:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.260 16:53:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.260 16:53:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:06:17.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:06:17.260 00:06:17.260 --- 10.0.0.2 ping statistics --- 00:06:17.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.260 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:06:17.260 16:53:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:06:17.260 00:06:17.260 --- 10.0.0.1 ping statistics --- 00:06:17.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.260 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:06:17.261 16:53:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.261 16:53:32 -- nvmf/common.sh@411 -- # return 0 00:06:17.261 16:53:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:17.261 16:53:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.261 16:53:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:17.261 16:53:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:17.261 16:53:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.261 16:53:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:17.261 16:53:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:17.261 16:53:32 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:17.261 16:53:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:17.261 16:53:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:17.261 16:53:32 -- common/autotest_common.sh@10 -- # set +x 00:06:17.261 16:53:32 -- nvmf/common.sh@470 -- # nvmfpid=1594542 00:06:17.261 16:53:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:17.261 16:53:32 -- nvmf/common.sh@471 -- # waitforlisten 
1594542 00:06:17.261 16:53:32 -- common/autotest_common.sh@817 -- # '[' -z 1594542 ']' 00:06:17.261 16:53:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.261 16:53:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:17.261 16:53:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.261 16:53:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:17.261 16:53:32 -- common/autotest_common.sh@10 -- # set +x 00:06:17.261 [2024-04-18 16:53:32.920646] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:06:17.261 [2024-04-18 16:53:32.920720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.519 [2024-04-18 16:53:32.993037] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.519 [2024-04-18 16:53:33.115735] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.519 [2024-04-18 16:53:33.115795] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.519 [2024-04-18 16:53:33.115823] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.519 [2024-04-18 16:53:33.115843] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.519 [2024-04-18 16:53:33.115862] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:17.519 [2024-04-18 16:53:33.115954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.519 [2024-04-18 16:53:33.116013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.519 [2024-04-18 16:53:33.116079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.519 [2024-04-18 16:53:33.116072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.777 16:53:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:17.777 16:53:33 -- common/autotest_common.sh@850 -- # return 0 00:06:17.777 16:53:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:17.777 16:53:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.777 16:53:33 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 [2024-04-18 16:53:33.271230] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@26 -- # seq 1 4 00:06:17.777 16:53:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:17.777 16:53:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 Null1 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 [2024-04-18 16:53:33.311607] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:17.777 16:53:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 Null2 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 
16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:17.777 16:53:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 Null3 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:17.777 16:53:33 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 Null4 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.777 16:53:33 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:17.777 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.777 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.777 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.778 16:53:33 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:18.036 00:06:18.036 Discovery Log Number of Records 6, Generation counter 6 00:06:18.036 =====Discovery Log Entry 0====== 00:06:18.036 trtype: tcp 00:06:18.036 adrfam: ipv4 00:06:18.036 subtype: current discovery subsystem 00:06:18.036 treq: not required 00:06:18.036 portid: 0 00:06:18.036 trsvcid: 4420 00:06:18.036 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:18.036 traddr: 10.0.0.2 00:06:18.036 eflags: explicit discovery connections, duplicate discovery information 00:06:18.036 sectype: none 00:06:18.036 =====Discovery Log Entry 1====== 00:06:18.036 trtype: tcp 00:06:18.036 adrfam: ipv4 00:06:18.036 subtype: nvme subsystem 00:06:18.036 treq: not required 00:06:18.036 portid: 0 00:06:18.036 trsvcid: 4420 00:06:18.036 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:18.036 traddr: 10.0.0.2 00:06:18.036 eflags: none 00:06:18.036 sectype: none 00:06:18.036 =====Discovery Log Entry 2====== 00:06:18.036 trtype: tcp 00:06:18.036 adrfam: ipv4 00:06:18.036 subtype: nvme subsystem 00:06:18.036 treq: not required 00:06:18.036 portid: 0 00:06:18.036 trsvcid: 4420 00:06:18.036 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:18.036 traddr: 10.0.0.2 00:06:18.036 eflags: none 00:06:18.036 sectype: none 00:06:18.036 =====Discovery Log Entry 3====== 00:06:18.036 trtype: tcp 00:06:18.036 adrfam: ipv4 00:06:18.036 subtype: nvme subsystem 00:06:18.036 treq: not required 00:06:18.036 portid: 0 00:06:18.036 trsvcid: 4420 00:06:18.036 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:18.036 traddr: 10.0.0.2 00:06:18.036 eflags: none 00:06:18.036 sectype: none 00:06:18.036 =====Discovery Log Entry 4====== 00:06:18.036 trtype: tcp 00:06:18.036 adrfam: ipv4 00:06:18.036 subtype: nvme subsystem 00:06:18.036 treq: not required 00:06:18.036 portid: 0 00:06:18.036 trsvcid: 4420 00:06:18.036 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:06:18.036 traddr: 10.0.0.2 00:06:18.036 eflags: none 00:06:18.036 sectype: none 00:06:18.036 =====Discovery Log Entry 5====== 00:06:18.036 trtype: tcp 00:06:18.036 adrfam: ipv4 00:06:18.036 subtype: discovery subsystem referral 00:06:18.036 treq: not required 00:06:18.036 portid: 0 00:06:18.036 trsvcid: 4430 00:06:18.036 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:18.036 traddr: 10.0.0.2 00:06:18.036 eflags: none 00:06:18.036 sectype: none 00:06:18.036 16:53:33 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:18.036 Perform nvmf subsystem discovery via RPC 00:06:18.036 16:53:33 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:18.036 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.036 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.036 [2024-04-18 16:53:33.536025] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:18.036 [ 00:06:18.036 { 00:06:18.036 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:18.036 "subtype": "Discovery", 00:06:18.036 "listen_addresses": [ 00:06:18.036 { 00:06:18.036 "transport": "TCP", 00:06:18.036 "trtype": "TCP", 00:06:18.036 "adrfam": "IPv4", 00:06:18.036 "traddr": "10.0.0.2", 00:06:18.036 "trsvcid": "4420" 00:06:18.036 } 00:06:18.036 ], 00:06:18.036 "allow_any_host": true, 00:06:18.036 "hosts": [] 00:06:18.036 }, 00:06:18.036 { 00:06:18.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:18.036 "subtype": "NVMe", 00:06:18.036 "listen_addresses": [ 00:06:18.036 { 00:06:18.036 "transport": "TCP", 00:06:18.036 "trtype": "TCP", 00:06:18.036 "adrfam": "IPv4", 00:06:18.036 "traddr": "10.0.0.2", 00:06:18.036 "trsvcid": "4420" 00:06:18.036 } 00:06:18.036 ], 00:06:18.036 "allow_any_host": true, 00:06:18.036 "hosts": [], 00:06:18.036 "serial_number": "SPDK00000000000001", 00:06:18.036 "model_number": 
"SPDK bdev Controller", 00:06:18.036 "max_namespaces": 32, 00:06:18.036 "min_cntlid": 1, 00:06:18.036 "max_cntlid": 65519, 00:06:18.036 "namespaces": [ 00:06:18.036 { 00:06:18.036 "nsid": 1, 00:06:18.036 "bdev_name": "Null1", 00:06:18.036 "name": "Null1", 00:06:18.036 "nguid": "832F9042AC774C83AFD4E9FE021F225D", 00:06:18.036 "uuid": "832f9042-ac77-4c83-afd4-e9fe021f225d" 00:06:18.036 } 00:06:18.036 ] 00:06:18.036 }, 00:06:18.036 { 00:06:18.036 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:18.036 "subtype": "NVMe", 00:06:18.036 "listen_addresses": [ 00:06:18.036 { 00:06:18.036 "transport": "TCP", 00:06:18.036 "trtype": "TCP", 00:06:18.036 "adrfam": "IPv4", 00:06:18.036 "traddr": "10.0.0.2", 00:06:18.036 "trsvcid": "4420" 00:06:18.036 } 00:06:18.036 ], 00:06:18.036 "allow_any_host": true, 00:06:18.036 "hosts": [], 00:06:18.036 "serial_number": "SPDK00000000000002", 00:06:18.036 "model_number": "SPDK bdev Controller", 00:06:18.036 "max_namespaces": 32, 00:06:18.036 "min_cntlid": 1, 00:06:18.036 "max_cntlid": 65519, 00:06:18.036 "namespaces": [ 00:06:18.036 { 00:06:18.036 "nsid": 1, 00:06:18.036 "bdev_name": "Null2", 00:06:18.036 "name": "Null2", 00:06:18.036 "nguid": "70DCAF626F7C4D8DA1EDA7008BAE7775", 00:06:18.036 "uuid": "70dcaf62-6f7c-4d8d-a1ed-a7008bae7775" 00:06:18.036 } 00:06:18.036 ] 00:06:18.036 }, 00:06:18.036 { 00:06:18.036 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:18.036 "subtype": "NVMe", 00:06:18.036 "listen_addresses": [ 00:06:18.036 { 00:06:18.036 "transport": "TCP", 00:06:18.036 "trtype": "TCP", 00:06:18.036 "adrfam": "IPv4", 00:06:18.036 "traddr": "10.0.0.2", 00:06:18.036 "trsvcid": "4420" 00:06:18.036 } 00:06:18.036 ], 00:06:18.036 "allow_any_host": true, 00:06:18.036 "hosts": [], 00:06:18.036 "serial_number": "SPDK00000000000003", 00:06:18.036 "model_number": "SPDK bdev Controller", 00:06:18.036 "max_namespaces": 32, 00:06:18.036 "min_cntlid": 1, 00:06:18.036 "max_cntlid": 65519, 00:06:18.036 "namespaces": [ 00:06:18.036 { 00:06:18.036 "nsid": 1, 
00:06:18.036 "bdev_name": "Null3", 00:06:18.036 "name": "Null3", 00:06:18.036 "nguid": "35CE93CF19C54DC4A83D49E0E2EDD375", 00:06:18.036 "uuid": "35ce93cf-19c5-4dc4-a83d-49e0e2edd375" 00:06:18.036 } 00:06:18.036 ] 00:06:18.036 }, 00:06:18.036 { 00:06:18.036 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:18.036 "subtype": "NVMe", 00:06:18.036 "listen_addresses": [ 00:06:18.036 { 00:06:18.036 "transport": "TCP", 00:06:18.036 "trtype": "TCP", 00:06:18.036 "adrfam": "IPv4", 00:06:18.036 "traddr": "10.0.0.2", 00:06:18.036 "trsvcid": "4420" 00:06:18.036 } 00:06:18.036 ], 00:06:18.036 "allow_any_host": true, 00:06:18.036 "hosts": [], 00:06:18.036 "serial_number": "SPDK00000000000004", 00:06:18.036 "model_number": "SPDK bdev Controller", 00:06:18.036 "max_namespaces": 32, 00:06:18.036 "min_cntlid": 1, 00:06:18.036 "max_cntlid": 65519, 00:06:18.036 "namespaces": [ 00:06:18.036 { 00:06:18.036 "nsid": 1, 00:06:18.036 "bdev_name": "Null4", 00:06:18.036 "name": "Null4", 00:06:18.036 "nguid": "2E7087B3004648F98F5E5C05E8DCEF47", 00:06:18.036 "uuid": "2e7087b3-0046-48f9-8f5e-5c05e8dcef47" 00:06:18.036 } 00:06:18.036 ] 00:06:18.036 } 00:06:18.036 ] 00:06:18.036 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.036 16:53:33 -- target/discovery.sh@42 -- # seq 1 4 00:06:18.036 16:53:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.036 16:53:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:18.036 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.036 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.036 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.036 16:53:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:18.036 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.036 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.036 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.036 16:53:33 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.036 16:53:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:18.036 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.036 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.036 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.036 16:53:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:18.036 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.036 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.036 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.036 16:53:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.036 16:53:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:18.036 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.036 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.036 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.036 16:53:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:18.036 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.036 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.036 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.036 16:53:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.036 16:53:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:18.036 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.036 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.036 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.036 16:53:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:18.037 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.037 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.037 
16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.037 16:53:33 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:18.037 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.037 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.037 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.037 16:53:33 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:18.037 16:53:33 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:18.037 16:53:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.037 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.037 16:53:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.037 16:53:33 -- target/discovery.sh@49 -- # check_bdevs= 00:06:18.037 16:53:33 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:18.037 16:53:33 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:18.037 16:53:33 -- target/discovery.sh@57 -- # nvmftestfini 00:06:18.037 16:53:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:18.037 16:53:33 -- nvmf/common.sh@117 -- # sync 00:06:18.037 16:53:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:18.037 16:53:33 -- nvmf/common.sh@120 -- # set +e 00:06:18.037 16:53:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:18.037 16:53:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:18.037 rmmod nvme_tcp 00:06:18.037 rmmod nvme_fabrics 00:06:18.037 rmmod nvme_keyring 00:06:18.037 16:53:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:18.037 16:53:33 -- nvmf/common.sh@124 -- # set -e 00:06:18.037 16:53:33 -- nvmf/common.sh@125 -- # return 0 00:06:18.037 16:53:33 -- nvmf/common.sh@478 -- # '[' -n 1594542 ']' 00:06:18.037 16:53:33 -- nvmf/common.sh@479 -- # killprocess 1594542 00:06:18.037 16:53:33 -- common/autotest_common.sh@936 -- # '[' -z 1594542 ']' 00:06:18.037 16:53:33 -- common/autotest_common.sh@940 -- # kill -0 1594542 00:06:18.037 
16:53:33 -- common/autotest_common.sh@941 -- # uname 00:06:18.037 16:53:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.037 16:53:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1594542 00:06:18.295 16:53:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.295 16:53:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.295 16:53:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1594542' 00:06:18.295 killing process with pid 1594542 00:06:18.295 16:53:33 -- common/autotest_common.sh@955 -- # kill 1594542 00:06:18.296 [2024-04-18 16:53:33.747515] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:18.296 16:53:33 -- common/autotest_common.sh@960 -- # wait 1594542 00:06:18.554 16:53:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:18.554 16:53:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:18.554 16:53:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:18.554 16:53:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:18.554 16:53:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:18.554 16:53:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.554 16:53:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:18.554 16:53:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.460 16:53:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:20.460 00:06:20.460 real 0m5.499s 00:06:20.460 user 0m4.364s 00:06:20.460 sys 0m1.847s 00:06:20.460 16:53:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.460 16:53:36 -- common/autotest_common.sh@10 -- # set +x 00:06:20.460 ************************************ 00:06:20.460 END TEST nvmf_discovery 00:06:20.460 ************************************ 00:06:20.460 16:53:36 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:20.460 16:53:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:20.460 16:53:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.460 16:53:36 -- common/autotest_common.sh@10 -- # set +x 00:06:20.726 ************************************ 00:06:20.726 START TEST nvmf_referrals 00:06:20.726 ************************************ 00:06:20.726 16:53:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:20.726 * Looking for test storage... 00:06:20.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.726 16:53:36 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.726 16:53:36 -- nvmf/common.sh@7 -- # uname -s 00:06:20.726 16:53:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.726 16:53:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.726 16:53:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.726 16:53:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.726 16:53:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.726 16:53:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.727 16:53:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.727 16:53:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.727 16:53:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.727 16:53:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.727 16:53:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.727 16:53:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.727 16:53:36 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.727 16:53:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.727 16:53:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.727 16:53:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.727 16:53:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.727 16:53:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.727 16:53:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.727 16:53:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.727 16:53:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.727 16:53:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.727 16:53:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.727 16:53:36 -- paths/export.sh@5 -- # export PATH 00:06:20.727 16:53:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.727 16:53:36 -- nvmf/common.sh@47 -- # : 0 00:06:20.727 16:53:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.727 16:53:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.727 16:53:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.727 16:53:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.727 16:53:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.727 16:53:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.727 16:53:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.727 16:53:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.727 16:53:36 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:20.727 16:53:36 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:20.727 16:53:36 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:20.727 16:53:36 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:20.727 16:53:36 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:20.727 16:53:36 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:20.727 16:53:36 -- target/referrals.sh@37 -- # nvmftestinit 00:06:20.727 16:53:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:20.727 16:53:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.727 16:53:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:20.727 16:53:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:20.727 16:53:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:20.727 16:53:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.727 16:53:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:20.727 16:53:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.727 16:53:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:20.727 16:53:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:20.727 16:53:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:20.727 16:53:36 -- common/autotest_common.sh@10 -- # set +x 00:06:22.636 16:53:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:22.637 16:53:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:22.637 16:53:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:22.637 16:53:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:22.637 16:53:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:22.637 16:53:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:22.637 16:53:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:22.637 16:53:38 -- nvmf/common.sh@295 -- # net_devs=() 00:06:22.637 16:53:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:22.637 16:53:38 -- nvmf/common.sh@296 -- # e810=() 00:06:22.637 16:53:38 -- nvmf/common.sh@296 -- # local 
-ga e810 00:06:22.637 16:53:38 -- nvmf/common.sh@297 -- # x722=() 00:06:22.637 16:53:38 -- nvmf/common.sh@297 -- # local -ga x722 00:06:22.637 16:53:38 -- nvmf/common.sh@298 -- # mlx=() 00:06:22.637 16:53:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:22.637 16:53:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.637 16:53:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:22.637 16:53:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:22.637 16:53:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:22.637 16:53:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.637 16:53:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:22.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:22.637 16:53:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.637 16:53:38 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.637 16:53:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:22.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:22.637 16:53:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:22.637 16:53:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.637 16:53:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.637 16:53:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:22.637 16:53:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.637 16:53:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:22.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:22.637 16:53:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.637 16:53:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.637 16:53:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.637 16:53:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:22.637 16:53:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.637 16:53:38 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:22.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:22.637 16:53:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.637 16:53:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:22.637 16:53:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:22.637 16:53:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:22.637 16:53:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:22.637 16:53:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.637 16:53:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.637 16:53:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.637 16:53:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:22.637 16:53:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.637 16:53:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.637 16:53:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:22.637 16:53:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.637 16:53:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.637 16:53:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:22.637 16:53:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:22.637 16:53:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.637 16:53:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.896 16:53:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.896 16:53:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.896 16:53:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:22.896 16:53:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.896 16:53:38 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:06:22.896 16:53:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.896 16:53:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:22.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:22.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:06:22.896 00:06:22.896 --- 10.0.0.2 ping statistics --- 00:06:22.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.896 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:06:22.896 16:53:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:06:22.896 00:06:22.896 --- 10.0.0.1 ping statistics --- 00:06:22.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.896 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:06:22.896 16:53:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.896 16:53:38 -- nvmf/common.sh@411 -- # return 0 00:06:22.896 16:53:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:22.896 16:53:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.896 16:53:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:22.896 16:53:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:22.896 16:53:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.896 16:53:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:22.896 16:53:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:22.896 16:53:38 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:22.896 16:53:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:22.896 16:53:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:22.896 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.896 16:53:38 -- nvmf/common.sh@470 -- # nvmfpid=1596632 00:06:22.896 16:53:38 
-- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:22.896 16:53:38 -- nvmf/common.sh@471 -- # waitforlisten 1596632 00:06:22.896 16:53:38 -- common/autotest_common.sh@817 -- # '[' -z 1596632 ']' 00:06:22.896 16:53:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.896 16:53:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:22.896 16:53:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.896 16:53:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:22.896 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.896 [2024-04-18 16:53:38.486125] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:06:22.896 [2024-04-18 16:53:38.486209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.896 [2024-04-18 16:53:38.548844] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.154 [2024-04-18 16:53:38.658057] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.154 [2024-04-18 16:53:38.658122] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.154 [2024-04-18 16:53:38.658144] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.154 [2024-04-18 16:53:38.658163] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:23.154 [2024-04-18 16:53:38.658193] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:23.154 [2024-04-18 16:53:38.658286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.154 [2024-04-18 16:53:38.658363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.154 [2024-04-18 16:53:38.658322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.154 [2024-04-18 16:53:38.658367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.154 16:53:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:23.154 16:53:38 -- common/autotest_common.sh@850 -- # return 0 00:06:23.154 16:53:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:23.154 16:53:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:23.155 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.155 16:53:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.155 16:53:38 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:23.155 16:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.155 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.155 [2024-04-18 16:53:38.822009] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.155 16:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.155 16:53:38 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:23.155 16:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.155 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.155 [2024-04-18 16:53:38.834246] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:23.155 16:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.155 16:53:38 -- target/referrals.sh@44 -- 
# rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:23.155 16:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.155 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.155 16:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.155 16:53:38 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:23.155 16:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.155 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.155 16:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.155 16:53:38 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:23.155 16:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.155 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.413 16:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.413 16:53:38 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.413 16:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.413 16:53:38 -- target/referrals.sh@48 -- # jq length 00:06:23.413 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.413 16:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.413 16:53:38 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:23.413 16:53:38 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:23.413 16:53:38 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.413 16:53:38 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.413 16:53:38 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.413 16:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.413 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.413 16:53:38 -- target/referrals.sh@21 -- # sort 00:06:23.413 16:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.413 16:53:38 -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:23.413 16:53:38 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:23.413 16:53:38 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:23.413 16:53:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.413 16:53:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.413 16:53:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.413 16:53:38 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.413 16:53:38 -- target/referrals.sh@26 -- # sort 00:06:23.413 16:53:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:23.413 16:53:39 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:23.413 16:53:39 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:23.413 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.413 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.413 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.413 16:53:39 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:23.413 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.413 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.413 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.413 16:53:39 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:23.413 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.413 16:53:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.670 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.670 16:53:39 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.670 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.670 16:53:39 -- target/referrals.sh@56 -- # jq length 00:06:23.670 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.670 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.670 16:53:39 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:23.670 16:53:39 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:23.670 16:53:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.670 16:53:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.670 16:53:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.670 16:53:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.670 16:53:39 -- target/referrals.sh@26 -- # sort 00:06:23.670 16:53:39 -- target/referrals.sh@26 -- # echo 00:06:23.670 16:53:39 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:23.670 16:53:39 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:23.670 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.670 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.670 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.670 16:53:39 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:23.670 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.670 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.670 16:53:39 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.670 16:53:39 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:23.670 16:53:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.670 16:53:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.670 16:53:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.670 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.670 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.670 16:53:39 -- target/referrals.sh@21 -- # sort 00:06:23.670 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.670 16:53:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:23.670 16:53:39 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:23.670 16:53:39 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:23.670 16:53:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.670 16:53:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.670 16:53:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.670 16:53:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.670 16:53:39 -- target/referrals.sh@26 -- # sort 00:06:23.928 16:53:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:23.928 16:53:39 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:23.928 16:53:39 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:23.928 16:53:39 -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:23.928 16:53:39 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:23.928 16:53:39 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.928 16:53:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:23.928 16:53:39 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:23.928 16:53:39 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:23.928 16:53:39 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:23.928 16:53:39 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:23.928 16:53:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.928 16:53:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:23.928 16:53:39 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:23.928 16:53:39 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:23.928 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.928 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.928 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.928 16:53:39 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:23.928 16:53:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.928 16:53:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.928 16:53:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.928 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.928 16:53:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.928 16:53:39 -- target/referrals.sh@21 -- # sort 00:06:24.186 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.186 16:53:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:24.186 16:53:39 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:24.186 16:53:39 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:24.186 16:53:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.186 16:53:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.186 16:53:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.186 16:53:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.186 16:53:39 -- target/referrals.sh@26 -- # sort 00:06:24.186 16:53:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:24.186 16:53:39 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:24.186 16:53:39 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:24.186 16:53:39 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:24.186 16:53:39 -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:24.186 16:53:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.186 16:53:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:24.186 16:53:39 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:24.186 16:53:39 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:24.186 16:53:39 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:24.186 16:53:39 -- target/referrals.sh@31 
-- # local 'subtype=discovery subsystem referral' 00:06:24.186 16:53:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.186 16:53:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:24.444 16:53:39 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:24.444 16:53:39 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:24.444 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.444 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.444 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.444 16:53:39 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.444 16:53:39 -- target/referrals.sh@82 -- # jq length 00:06:24.445 16:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.445 16:53:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.445 16:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.445 16:53:39 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:24.445 16:53:39 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:24.445 16:53:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.445 16:53:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.445 16:53:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.445 16:53:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.445 16:53:39 -- 
target/referrals.sh@26 -- # sort 00:06:24.445 16:53:40 -- target/referrals.sh@26 -- # echo 00:06:24.445 16:53:40 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:24.445 16:53:40 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:24.445 16:53:40 -- target/referrals.sh@86 -- # nvmftestfini 00:06:24.445 16:53:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:24.445 16:53:40 -- nvmf/common.sh@117 -- # sync 00:06:24.445 16:53:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:24.445 16:53:40 -- nvmf/common.sh@120 -- # set +e 00:06:24.445 16:53:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:24.445 16:53:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:24.445 rmmod nvme_tcp 00:06:24.445 rmmod nvme_fabrics 00:06:24.445 rmmod nvme_keyring 00:06:24.445 16:53:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:24.445 16:53:40 -- nvmf/common.sh@124 -- # set -e 00:06:24.445 16:53:40 -- nvmf/common.sh@125 -- # return 0 00:06:24.445 16:53:40 -- nvmf/common.sh@478 -- # '[' -n 1596632 ']' 00:06:24.445 16:53:40 -- nvmf/common.sh@479 -- # killprocess 1596632 00:06:24.445 16:53:40 -- common/autotest_common.sh@936 -- # '[' -z 1596632 ']' 00:06:24.445 16:53:40 -- common/autotest_common.sh@940 -- # kill -0 1596632 00:06:24.445 16:53:40 -- common/autotest_common.sh@941 -- # uname 00:06:24.445 16:53:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.445 16:53:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1596632 00:06:24.445 16:53:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.445 16:53:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.445 16:53:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1596632' 00:06:24.445 killing process with pid 1596632 00:06:24.445 16:53:40 -- common/autotest_common.sh@955 -- # kill 1596632 00:06:24.445 16:53:40 -- common/autotest_common.sh@960 -- # wait 1596632 00:06:24.703 16:53:40 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:24.703 16:53:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:24.703 16:53:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:24.703 16:53:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:24.703 16:53:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:24.703 16:53:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.703 16:53:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:24.703 16:53:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.274 16:53:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:27.274 00:06:27.274 real 0m6.247s 00:06:27.274 user 0m8.134s 00:06:27.274 sys 0m1.981s 00:06:27.274 16:53:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.274 16:53:42 -- common/autotest_common.sh@10 -- # set +x 00:06:27.274 ************************************ 00:06:27.274 END TEST nvmf_referrals 00:06:27.274 ************************************ 00:06:27.274 16:53:42 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:27.274 16:53:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:27.274 16:53:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.274 16:53:42 -- common/autotest_common.sh@10 -- # set +x 00:06:27.274 ************************************ 00:06:27.274 START TEST nvmf_connect_disconnect 00:06:27.274 ************************************ 00:06:27.274 16:53:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:27.274 * Looking for test storage... 
00:06:27.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.274 16:53:42 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.274 16:53:42 -- nvmf/common.sh@7 -- # uname -s 00:06:27.274 16:53:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.274 16:53:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.274 16:53:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.274 16:53:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.274 16:53:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.274 16:53:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.274 16:53:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.274 16:53:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.274 16:53:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.274 16:53:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.274 16:53:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.274 16:53:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.274 16:53:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.274 16:53:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.274 16:53:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.274 16:53:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.274 16:53:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.274 16:53:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.274 16:53:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.274 16:53:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.274 16:53:42 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.274 16:53:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.274 16:53:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.274 16:53:42 -- paths/export.sh@5 -- # export PATH 00:06:27.274 16:53:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.274 16:53:42 -- nvmf/common.sh@47 -- # : 0 00:06:27.274 16:53:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.274 16:53:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.274 16:53:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.274 16:53:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.274 16:53:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.274 16:53:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.274 16:53:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.274 16:53:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.274 16:53:42 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:27.274 16:53:42 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:27.274 16:53:42 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:27.274 16:53:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:27.274 16:53:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.274 16:53:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:27.274 16:53:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:27.274 16:53:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:27.274 16:53:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.274 16:53:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.274 16:53:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:06:27.274 16:53:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:27.274 16:53:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:27.274 16:53:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:27.274 16:53:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.175 16:53:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:29.175 16:53:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:29.175 16:53:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:29.175 16:53:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:29.175 16:53:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:29.175 16:53:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:29.175 16:53:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:29.175 16:53:44 -- nvmf/common.sh@295 -- # net_devs=() 00:06:29.175 16:53:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:29.175 16:53:44 -- nvmf/common.sh@296 -- # e810=() 00:06:29.175 16:53:44 -- nvmf/common.sh@296 -- # local -ga e810 00:06:29.175 16:53:44 -- nvmf/common.sh@297 -- # x722=() 00:06:29.175 16:53:44 -- nvmf/common.sh@297 -- # local -ga x722 00:06:29.175 16:53:44 -- nvmf/common.sh@298 -- # mlx=() 00:06:29.175 16:53:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:29.175 16:53:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:06:29.175 16:53:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.175 16:53:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:29.175 16:53:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:29.175 16:53:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:29.175 16:53:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:29.175 16:53:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:29.175 16:53:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:29.175 16:53:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.175 16:53:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:29.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:29.175 16:53:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.175 16:53:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.176 16:53:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:29.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:29.176 16:53:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:29.176 16:53:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:29.176 
16:53:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.176 16:53:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.176 16:53:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:29.176 16:53:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.176 16:53:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:29.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:29.176 16:53:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.176 16:53:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.176 16:53:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.176 16:53:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:29.176 16:53:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.176 16:53:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:29.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:29.176 16:53:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.176 16:53:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:29.176 16:53:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:29.176 16:53:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:29.176 16:53:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.176 16:53:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.176 16:53:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.176 16:53:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:29.176 16:53:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.176 16:53:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.176 16:53:44 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:29.176 16:53:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.176 16:53:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.176 16:53:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:29.176 16:53:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:29.176 16:53:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.176 16:53:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.176 16:53:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.176 16:53:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.176 16:53:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:29.176 16:53:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.176 16:53:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.176 16:53:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.176 16:53:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:29.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:06:29.176 00:06:29.176 --- 10.0.0.2 ping statistics --- 00:06:29.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.176 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:29.176 16:53:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:29.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:06:29.176 00:06:29.176 --- 10.0.0.1 ping statistics --- 00:06:29.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.176 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:06:29.176 16:53:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.176 16:53:44 -- nvmf/common.sh@411 -- # return 0 00:06:29.176 16:53:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:29.176 16:53:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.176 16:53:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:29.176 16:53:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.176 16:53:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:29.176 16:53:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:29.176 16:53:44 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:29.176 16:53:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:29.176 16:53:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:29.176 16:53:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.176 16:53:44 -- nvmf/common.sh@470 -- # nvmfpid=1598931 00:06:29.176 16:53:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:29.176 16:53:44 -- nvmf/common.sh@471 -- # waitforlisten 1598931 00:06:29.176 16:53:44 -- common/autotest_common.sh@817 -- # '[' -z 1598931 ']' 00:06:29.176 16:53:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.176 16:53:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.176 16:53:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:29.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.176 16:53:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.176 16:53:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.176 [2024-04-18 16:53:44.849135] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:06:29.176 [2024-04-18 16:53:44.849228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.435 [2024-04-18 16:53:44.913773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.435 [2024-04-18 16:53:45.024643] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.435 [2024-04-18 16:53:45.024703] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.435 [2024-04-18 16:53:45.024725] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.435 [2024-04-18 16:53:45.024744] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.435 [2024-04-18 16:53:45.024759] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:29.435 [2024-04-18 16:53:45.024841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.435 [2024-04-18 16:53:45.024906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.435 [2024-04-18 16:53:45.024946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.435 [2024-04-18 16:53:45.024952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.693 16:53:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.693 16:53:45 -- common/autotest_common.sh@850 -- # return 0 00:06:29.693 16:53:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:29.693 16:53:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:29.693 16:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 16:53:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:29.693 16:53:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.693 16:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 [2024-04-18 16:53:45.182201] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.693 16:53:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:29.693 16:53:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.693 16:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 16:53:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:29.693 16:53:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.693 16:53:45 -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.693 16:53:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:29.693 16:53:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.693 16:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 16:53:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.693 16:53:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.693 16:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 [2024-04-18 16:53:45.235933] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.693 16:53:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:29.693 16:53:45 -- target/connect_disconnect.sh@34 -- # set +x 00:06:32.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:35.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:38.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:40.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.074 16:53:58 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:43.074 16:53:58 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:43.074 16:53:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:43.074 16:53:58 -- nvmf/common.sh@117 -- # sync 00:06:43.074 16:53:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:43.074 16:53:58 -- nvmf/common.sh@120 -- # set +e 00:06:43.074 16:53:58 -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:06:43.074 16:53:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:43.074 rmmod nvme_tcp 00:06:43.074 rmmod nvme_fabrics 00:06:43.074 rmmod nvme_keyring 00:06:43.074 16:53:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.074 16:53:58 -- nvmf/common.sh@124 -- # set -e 00:06:43.074 16:53:58 -- nvmf/common.sh@125 -- # return 0 00:06:43.074 16:53:58 -- nvmf/common.sh@478 -- # '[' -n 1598931 ']' 00:06:43.074 16:53:58 -- nvmf/common.sh@479 -- # killprocess 1598931 00:06:43.074 16:53:58 -- common/autotest_common.sh@936 -- # '[' -z 1598931 ']' 00:06:43.074 16:53:58 -- common/autotest_common.sh@940 -- # kill -0 1598931 00:06:43.074 16:53:58 -- common/autotest_common.sh@941 -- # uname 00:06:43.074 16:53:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.074 16:53:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1598931 00:06:43.074 16:53:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.074 16:53:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.074 16:53:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1598931' 00:06:43.074 killing process with pid 1598931 00:06:43.074 16:53:58 -- common/autotest_common.sh@955 -- # kill 1598931 00:06:43.074 16:53:58 -- common/autotest_common.sh@960 -- # wait 1598931 00:06:43.332 16:53:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:43.332 16:53:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:43.332 16:53:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:43.332 16:53:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.332 16:53:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.332 16:53:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.332 16:53:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.332 16:53:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.867 
16:54:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:45.867 00:06:45.867 real 0m18.450s 00:06:45.867 user 0m55.094s 00:06:45.867 sys 0m3.202s 00:06:45.867 16:54:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.867 16:54:01 -- common/autotest_common.sh@10 -- # set +x 00:06:45.867 ************************************ 00:06:45.867 END TEST nvmf_connect_disconnect 00:06:45.867 ************************************ 00:06:45.867 16:54:01 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:45.867 16:54:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:45.867 16:54:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.867 16:54:01 -- common/autotest_common.sh@10 -- # set +x 00:06:45.867 ************************************ 00:06:45.867 START TEST nvmf_multitarget 00:06:45.867 ************************************ 00:06:45.867 16:54:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:45.867 * Looking for test storage... 
00:06:45.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.867 16:54:01 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.867 16:54:01 -- nvmf/common.sh@7 -- # uname -s 00:06:45.867 16:54:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.867 16:54:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.867 16:54:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.867 16:54:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.867 16:54:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.867 16:54:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.867 16:54:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.867 16:54:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.867 16:54:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.867 16:54:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.867 16:54:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.867 16:54:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.867 16:54:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.867 16:54:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.867 16:54:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.867 16:54:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.867 16:54:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.867 16:54:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.867 16:54:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.867 16:54:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.867 16:54:01 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.867 16:54:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.867 16:54:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.867 16:54:01 -- paths/export.sh@5 -- # export PATH 00:06:45.867 16:54:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.867 16:54:01 -- nvmf/common.sh@47 -- # : 0 00:06:45.867 16:54:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.867 16:54:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.867 16:54:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.867 16:54:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.867 16:54:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.867 16:54:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.867 16:54:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.867 16:54:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.867 16:54:01 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:45.867 16:54:01 -- target/multitarget.sh@15 -- # nvmftestinit 00:06:45.867 16:54:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:45.867 16:54:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.867 16:54:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:45.867 16:54:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:45.867 16:54:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:45.867 16:54:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.867 16:54:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.867 16:54:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.867 16:54:01 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:45.867 16:54:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:45.867 16:54:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.867 16:54:01 -- common/autotest_common.sh@10 -- # set +x 00:06:47.772 16:54:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:47.772 16:54:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.772 16:54:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.772 16:54:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.772 16:54:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.772 16:54:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.772 16:54:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.772 16:54:03 -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.772 16:54:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.772 16:54:03 -- nvmf/common.sh@296 -- # e810=() 00:06:47.772 16:54:03 -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.772 16:54:03 -- nvmf/common.sh@297 -- # x722=() 00:06:47.772 16:54:03 -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.772 16:54:03 -- nvmf/common.sh@298 -- # mlx=() 00:06:47.772 16:54:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.772 16:54:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.772 16:54:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.772 16:54:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.772 16:54:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.772 16:54:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.772 16:54:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.772 16:54:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.772 16:54:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.773 16:54:03 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.773 16:54:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.773 16:54:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.773 16:54:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.773 16:54:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.773 16:54:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.773 16:54:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.773 16:54:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:47.773 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:47.773 16:54:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.773 16:54:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:47.773 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:47.773 16:54:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.773 16:54:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.773 16:54:03 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.773 16:54:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.773 16:54:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:47.773 16:54:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.773 16:54:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:47.773 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:47.773 16:54:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.773 16:54:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.773 16:54:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.773 16:54:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:47.773 16:54:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.773 16:54:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:47.773 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:47.773 16:54:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.773 16:54:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:47.773 16:54:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:47.773 16:54:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:47.773 16:54:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.773 16:54:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.773 16:54:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.773 16:54:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.773 16:54:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.773 16:54:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.773 16:54:03 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.773 16:54:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.773 16:54:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.773 16:54:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.773 16:54:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.773 16:54:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.773 16:54:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.773 16:54:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.773 16:54:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.773 16:54:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:47.773 16:54:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.773 16:54:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.773 16:54:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.773 16:54:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:06:47.773 00:06:47.773 --- 10.0.0.2 ping statistics --- 00:06:47.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.773 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:06:47.773 16:54:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:47.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:06:47.773 00:06:47.773 --- 10.0.0.1 ping statistics --- 00:06:47.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.773 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:06:47.773 16:54:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.773 16:54:03 -- nvmf/common.sh@411 -- # return 0 00:06:47.773 16:54:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:47.773 16:54:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.773 16:54:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:47.773 16:54:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.773 16:54:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:47.773 16:54:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:47.773 16:54:03 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:47.773 16:54:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:47.773 16:54:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:47.773 16:54:03 -- common/autotest_common.sh@10 -- # set +x 00:06:47.773 16:54:03 -- nvmf/common.sh@470 -- # nvmfpid=1602698 00:06:47.773 16:54:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:47.773 16:54:03 -- nvmf/common.sh@471 -- # waitforlisten 1602698 00:06:47.773 16:54:03 -- common/autotest_common.sh@817 -- # '[' -z 1602698 ']' 00:06:47.773 16:54:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.773 16:54:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:47.773 16:54:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:47.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.773 16:54:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:47.773 16:54:03 -- common/autotest_common.sh@10 -- # set +x 00:06:47.773 [2024-04-18 16:54:03.453660] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:06:47.773 [2024-04-18 16:54:03.453753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.032 [2024-04-18 16:54:03.518931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.032 [2024-04-18 16:54:03.629602] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.032 [2024-04-18 16:54:03.629676] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.032 [2024-04-18 16:54:03.629698] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.032 [2024-04-18 16:54:03.629715] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.032 [2024-04-18 16:54:03.629729] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:48.032 [2024-04-18 16:54:03.629823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.032 [2024-04-18 16:54:03.629889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.032 [2024-04-18 16:54:03.629962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.032 [2024-04-18 16:54:03.629954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.289 16:54:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:48.289 16:54:03 -- common/autotest_common.sh@850 -- # return 0 00:06:48.289 16:54:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:48.289 16:54:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:48.289 16:54:03 -- common/autotest_common.sh@10 -- # set +x 00:06:48.289 16:54:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.289 16:54:03 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:48.289 16:54:03 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:48.289 16:54:03 -- target/multitarget.sh@21 -- # jq length 00:06:48.289 16:54:03 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:48.289 16:54:03 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:48.289 "nvmf_tgt_1" 00:06:48.547 16:54:03 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:48.547 "nvmf_tgt_2" 00:06:48.547 16:54:04 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:48.547 16:54:04 -- target/multitarget.sh@28 -- # jq length 00:06:48.547 
16:54:04 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:06:48.547 16:54:04 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:48.805 true 00:06:48.805 16:54:04 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:48.805 true 00:06:48.805 16:54:04 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:48.805 16:54:04 -- target/multitarget.sh@35 -- # jq length 00:06:49.063 16:54:04 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:49.063 16:54:04 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:49.063 16:54:04 -- target/multitarget.sh@41 -- # nvmftestfini 00:06:49.063 16:54:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:49.063 16:54:04 -- nvmf/common.sh@117 -- # sync 00:06:49.063 16:54:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:49.063 16:54:04 -- nvmf/common.sh@120 -- # set +e 00:06:49.063 16:54:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:49.063 16:54:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:49.063 rmmod nvme_tcp 00:06:49.063 rmmod nvme_fabrics 00:06:49.063 rmmod nvme_keyring 00:06:49.063 16:54:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:49.063 16:54:04 -- nvmf/common.sh@124 -- # set -e 00:06:49.063 16:54:04 -- nvmf/common.sh@125 -- # return 0 00:06:49.063 16:54:04 -- nvmf/common.sh@478 -- # '[' -n 1602698 ']' 00:06:49.063 16:54:04 -- nvmf/common.sh@479 -- # killprocess 1602698 00:06:49.063 16:54:04 -- common/autotest_common.sh@936 -- # '[' -z 1602698 ']' 00:06:49.063 16:54:04 -- common/autotest_common.sh@940 -- # kill -0 1602698 00:06:49.063 16:54:04 -- common/autotest_common.sh@941 -- # uname 00:06:49.063 16:54:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:06:49.063 16:54:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1602698 00:06:49.063 16:54:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.063 16:54:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.063 16:54:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1602698' 00:06:49.063 killing process with pid 1602698 00:06:49.063 16:54:04 -- common/autotest_common.sh@955 -- # kill 1602698 00:06:49.063 16:54:04 -- common/autotest_common.sh@960 -- # wait 1602698 00:06:49.321 16:54:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:49.321 16:54:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:49.321 16:54:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:49.321 16:54:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:49.321 16:54:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:49.322 16:54:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.322 16:54:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.322 16:54:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.878 16:54:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:51.878 00:06:51.878 real 0m5.807s 00:06:51.878 user 0m6.447s 00:06:51.878 sys 0m1.953s 00:06:51.878 16:54:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.878 16:54:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.878 ************************************ 00:06:51.878 END TEST nvmf_multitarget 00:06:51.878 ************************************ 00:06:51.878 16:54:06 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:51.878 16:54:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:51.878 16:54:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.878 16:54:06 -- common/autotest_common.sh@10 -- # set +x 
00:06:51.878 ************************************ 00:06:51.878 START TEST nvmf_rpc 00:06:51.878 ************************************ 00:06:51.878 16:54:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:51.878 * Looking for test storage... 00:06:51.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.878 16:54:07 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.878 16:54:07 -- nvmf/common.sh@7 -- # uname -s 00:06:51.879 16:54:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.879 16:54:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.879 16:54:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.879 16:54:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.879 16:54:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.879 16:54:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.879 16:54:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.879 16:54:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.879 16:54:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.879 16:54:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.879 16:54:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:51.879 16:54:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:51.879 16:54:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.879 16:54:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.879 16:54:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.879 16:54:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.879 16:54:07 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.879 16:54:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.879 16:54:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.879 16:54:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.879 16:54:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.879 16:54:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.879 16:54:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.879 16:54:07 -- paths/export.sh@5 -- # export PATH 00:06:51.879 16:54:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.879 16:54:07 -- nvmf/common.sh@47 -- # : 0 00:06:51.879 16:54:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.879 16:54:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.879 16:54:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.879 16:54:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.879 16:54:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.879 16:54:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.879 16:54:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.879 16:54:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.879 16:54:07 -- target/rpc.sh@11 -- # loops=5 00:06:51.879 16:54:07 -- target/rpc.sh@23 -- # nvmftestinit 00:06:51.879 16:54:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:51.879 
16:54:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.879 16:54:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:51.879 16:54:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:51.879 16:54:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:51.879 16:54:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.879 16:54:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.879 16:54:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.879 16:54:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:51.879 16:54:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:51.879 16:54:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:51.879 16:54:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.782 16:54:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:53.782 16:54:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.782 16:54:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.782 16:54:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.782 16:54:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.782 16:54:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.782 16:54:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.782 16:54:09 -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.782 16:54:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.782 16:54:09 -- nvmf/common.sh@296 -- # e810=() 00:06:53.782 16:54:09 -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.782 16:54:09 -- nvmf/common.sh@297 -- # x722=() 00:06:53.782 16:54:09 -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.782 16:54:09 -- nvmf/common.sh@298 -- # mlx=() 00:06:53.782 16:54:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.783 16:54:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.783 16:54:09 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.783 16:54:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.783 16:54:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.783 16:54:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.783 16:54:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.783 16:54:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:53.783 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:53.783 16:54:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.783 16:54:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:53.783 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:06:53.783 16:54:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.783 16:54:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.783 16:54:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.783 16:54:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:53.783 16:54:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.783 16:54:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:53.783 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:53.783 16:54:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.783 16:54:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.783 16:54:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.783 16:54:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:53.783 16:54:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.783 16:54:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:53.783 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:53.783 16:54:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.783 16:54:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:53.783 16:54:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:53.783 16:54:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:53.783 
16:54:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:53.783 16:54:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.783 16:54:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.783 16:54:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.783 16:54:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.783 16:54:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.783 16:54:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.783 16:54:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.783 16:54:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.783 16:54:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.783 16:54:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.783 16:54:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.783 16:54:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.783 16:54:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.783 16:54:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.783 16:54:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.783 16:54:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.783 16:54:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.783 16:54:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.783 16:54:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.783 16:54:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:53.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:06:53.783 00:06:53.783 --- 10.0.0.2 ping statistics --- 00:06:53.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.783 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:06:53.783 16:54:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:06:53.783 00:06:53.783 --- 10.0.0.1 ping statistics --- 00:06:53.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.783 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:06:53.783 16:54:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.783 16:54:09 -- nvmf/common.sh@411 -- # return 0 00:06:53.783 16:54:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:53.783 16:54:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.783 16:54:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:53.783 16:54:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.783 16:54:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:53.783 16:54:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:53.783 16:54:09 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:53.783 16:54:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:53.783 16:54:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:53.783 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.783 16:54:09 -- nvmf/common.sh@470 -- # nvmfpid=1605309 00:06:53.783 16:54:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.783 16:54:09 -- nvmf/common.sh@471 -- # waitforlisten 1605309 00:06:53.783 16:54:09 -- common/autotest_common.sh@817 -- # 
'[' -z 1605309 ']' 00:06:53.783 16:54:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.783 16:54:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.783 16:54:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.783 16:54:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.783 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.783 [2024-04-18 16:54:09.300205] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:06:53.783 [2024-04-18 16:54:09.300291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.783 [2024-04-18 16:54:09.363895] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.783 [2024-04-18 16:54:09.472359] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.783 [2024-04-18 16:54:09.472424] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.783 [2024-04-18 16:54:09.472455] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.783 [2024-04-18 16:54:09.472475] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.783 [2024-04-18 16:54:09.472490] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:53.783 [2024-04-18 16:54:09.472561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.783 [2024-04-18 16:54:09.472591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.783 [2024-04-18 16:54:09.472650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.783 [2024-04-18 16:54:09.472655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.042 16:54:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:54.042 16:54:09 -- common/autotest_common.sh@850 -- # return 0 00:06:54.042 16:54:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:54.042 16:54:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:54.042 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.042 16:54:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.042 16:54:09 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:54.042 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.042 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.042 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.042 16:54:09 -- target/rpc.sh@26 -- # stats='{ 00:06:54.042 "tick_rate": 2700000000, 00:06:54.042 "poll_groups": [ 00:06:54.042 { 00:06:54.042 "name": "nvmf_tgt_poll_group_0", 00:06:54.042 "admin_qpairs": 0, 00:06:54.042 "io_qpairs": 0, 00:06:54.042 "current_admin_qpairs": 0, 00:06:54.042 "current_io_qpairs": 0, 00:06:54.042 "pending_bdev_io": 0, 00:06:54.042 "completed_nvme_io": 0, 00:06:54.042 "transports": [] 00:06:54.042 }, 00:06:54.042 { 00:06:54.042 "name": "nvmf_tgt_poll_group_1", 00:06:54.042 "admin_qpairs": 0, 00:06:54.042 "io_qpairs": 0, 00:06:54.042 "current_admin_qpairs": 0, 00:06:54.042 "current_io_qpairs": 0, 00:06:54.042 "pending_bdev_io": 0, 00:06:54.042 "completed_nvme_io": 0, 00:06:54.042 "transports": [] 00:06:54.042 }, 00:06:54.042 { 00:06:54.042 "name": 
"nvmf_tgt_poll_group_2", 00:06:54.042 "admin_qpairs": 0, 00:06:54.042 "io_qpairs": 0, 00:06:54.042 "current_admin_qpairs": 0, 00:06:54.042 "current_io_qpairs": 0, 00:06:54.042 "pending_bdev_io": 0, 00:06:54.042 "completed_nvme_io": 0, 00:06:54.042 "transports": [] 00:06:54.042 }, 00:06:54.042 { 00:06:54.042 "name": "nvmf_tgt_poll_group_3", 00:06:54.042 "admin_qpairs": 0, 00:06:54.042 "io_qpairs": 0, 00:06:54.042 "current_admin_qpairs": 0, 00:06:54.042 "current_io_qpairs": 0, 00:06:54.042 "pending_bdev_io": 0, 00:06:54.042 "completed_nvme_io": 0, 00:06:54.042 "transports": [] 00:06:54.042 } 00:06:54.042 ] 00:06:54.042 }' 00:06:54.042 16:54:09 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:54.042 16:54:09 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:54.042 16:54:09 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:54.042 16:54:09 -- target/rpc.sh@15 -- # wc -l 00:06:54.042 16:54:09 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:54.042 16:54:09 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:54.042 16:54:09 -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:54.042 16:54:09 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:54.042 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.042 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.042 [2024-04-18 16:54:09.726496] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.042 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.042 16:54:09 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:54.042 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.042 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.042 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.042 16:54:09 -- target/rpc.sh@33 -- # stats='{ 00:06:54.042 "tick_rate": 2700000000, 00:06:54.042 "poll_groups": [ 00:06:54.042 { 00:06:54.042 "name": 
"nvmf_tgt_poll_group_0", 00:06:54.042 "admin_qpairs": 0, 00:06:54.042 "io_qpairs": 0, 00:06:54.042 "current_admin_qpairs": 0, 00:06:54.042 "current_io_qpairs": 0, 00:06:54.042 "pending_bdev_io": 0, 00:06:54.042 "completed_nvme_io": 0, 00:06:54.042 "transports": [ 00:06:54.042 { 00:06:54.042 "trtype": "TCP" 00:06:54.042 } 00:06:54.042 ] 00:06:54.042 }, 00:06:54.042 { 00:06:54.042 "name": "nvmf_tgt_poll_group_1", 00:06:54.042 "admin_qpairs": 0, 00:06:54.042 "io_qpairs": 0, 00:06:54.042 "current_admin_qpairs": 0, 00:06:54.042 "current_io_qpairs": 0, 00:06:54.042 "pending_bdev_io": 0, 00:06:54.042 "completed_nvme_io": 0, 00:06:54.042 "transports": [ 00:06:54.042 { 00:06:54.042 "trtype": "TCP" 00:06:54.042 } 00:06:54.042 ] 00:06:54.042 }, 00:06:54.042 { 00:06:54.042 "name": "nvmf_tgt_poll_group_2", 00:06:54.042 "admin_qpairs": 0, 00:06:54.042 "io_qpairs": 0, 00:06:54.042 "current_admin_qpairs": 0, 00:06:54.042 "current_io_qpairs": 0, 00:06:54.042 "pending_bdev_io": 0, 00:06:54.042 "completed_nvme_io": 0, 00:06:54.042 "transports": [ 00:06:54.042 { 00:06:54.042 "trtype": "TCP" 00:06:54.042 } 00:06:54.042 ] 00:06:54.042 }, 00:06:54.042 { 00:06:54.042 "name": "nvmf_tgt_poll_group_3", 00:06:54.042 "admin_qpairs": 0, 00:06:54.042 "io_qpairs": 0, 00:06:54.042 "current_admin_qpairs": 0, 00:06:54.042 "current_io_qpairs": 0, 00:06:54.042 "pending_bdev_io": 0, 00:06:54.042 "completed_nvme_io": 0, 00:06:54.042 "transports": [ 00:06:54.042 { 00:06:54.042 "trtype": "TCP" 00:06:54.042 } 00:06:54.042 ] 00:06:54.042 } 00:06:54.042 ] 00:06:54.042 }' 00:06:54.301 16:54:09 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:54.301 16:54:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:54.301 16:54:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:54.301 16:54:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:54.301 16:54:09 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:54.301 16:54:09 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:06:54.301 16:54:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:06:54.301 16:54:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:54.301 16:54:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:54.301 16:54:09 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:54.301 16:54:09 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:54.301 16:54:09 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:54.301 16:54:09 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:54.301 16:54:09 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:54.301 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.301 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.301 Malloc1 00:06:54.301 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.301 16:54:09 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:54.301 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.301 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.301 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.301 16:54:09 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:54.301 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.301 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.301 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.301 16:54:09 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:54.301 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.301 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.301 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.301 16:54:09 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:06:54.301 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.301 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.301 [2024-04-18 16:54:09.882300] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.301 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.301 16:54:09 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:54.301 16:54:09 -- common/autotest_common.sh@638 -- # local es=0 00:06:54.301 16:54:09 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:54.301 16:54:09 -- common/autotest_common.sh@626 -- # local arg=nvme 00:06:54.301 16:54:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:54.301 16:54:09 -- common/autotest_common.sh@630 -- # type -t nvme 00:06:54.301 16:54:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:54.301 16:54:09 -- common/autotest_common.sh@632 -- # type -P nvme 00:06:54.301 16:54:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:54.301 16:54:09 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:06:54.301 16:54:09 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:06:54.302 16:54:09 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:54.302 [2024-04-18 16:54:09.904752] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:54.302 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:54.302 could not add new controller: failed to write to nvme-fabrics device 00:06:54.302 16:54:09 -- common/autotest_common.sh@641 -- # es=1 00:06:54.302 16:54:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:54.302 16:54:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:54.302 16:54:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:54.302 16:54:09 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:54.302 16:54:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.302 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.302 16:54:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.302 16:54:09 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:54.867 16:54:10 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:54.867 16:54:10 -- common/autotest_common.sh@1184 -- # local i=0 00:06:54.867 16:54:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:54.867 16:54:10 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:54.867 16:54:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:56.765 16:54:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:56.765 16:54:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:57.023 16:54:12 -- common/autotest_common.sh@1193 -- 
# grep -c SPDKISFASTANDAWESOME 00:06:57.023 16:54:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:57.023 16:54:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:57.023 16:54:12 -- common/autotest_common.sh@1194 -- # return 0 00:06:57.023 16:54:12 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:57.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.023 16:54:12 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:57.023 16:54:12 -- common/autotest_common.sh@1205 -- # local i=0 00:06:57.023 16:54:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:57.023 16:54:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.023 16:54:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:57.023 16:54:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.023 16:54:12 -- common/autotest_common.sh@1217 -- # return 0 00:06:57.023 16:54:12 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:57.023 16:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.023 16:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.023 16:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.023 16:54:12 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.023 16:54:12 -- common/autotest_common.sh@638 -- # local es=0 00:06:57.023 16:54:12 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:06:57.023 16:54:12 -- common/autotest_common.sh@626 -- # local arg=nvme 00:06:57.023 16:54:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:57.023 16:54:12 -- common/autotest_common.sh@630 -- # type -t nvme 00:06:57.023 16:54:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:57.023 16:54:12 -- common/autotest_common.sh@632 -- # type -P nvme 00:06:57.023 16:54:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:57.023 16:54:12 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:06:57.023 16:54:12 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:06:57.023 16:54:12 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.023 [2024-04-18 16:54:12.646604] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:57.023 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:57.023 could not add new controller: failed to write to nvme-fabrics device 00:06:57.023 16:54:12 -- common/autotest_common.sh@641 -- # es=1 00:06:57.023 16:54:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:57.023 16:54:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:57.023 16:54:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:57.023 16:54:12 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:06:57.023 16:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.023 16:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.023 16:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.023 16:54:12 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.589 16:54:13 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:06:57.589 16:54:13 -- common/autotest_common.sh@1184 -- # local i=0 00:06:57.589 16:54:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:57.589 16:54:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:57.589 16:54:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:00.112 16:54:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:00.112 16:54:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:00.112 16:54:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:00.112 16:54:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:00.112 16:54:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:00.112 16:54:15 -- common/autotest_common.sh@1194 -- # return 0 00:07:00.112 16:54:15 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:00.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.112 16:54:15 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:00.112 16:54:15 -- common/autotest_common.sh@1205 -- # local i=0 00:07:00.112 16:54:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:00.112 16:54:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.112 16:54:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:00.112 16:54:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.112 16:54:15 -- common/autotest_common.sh@1217 -- # return 0 00:07:00.112 16:54:15 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:00.112 16:54:15 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:00.112 16:54:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.112 16:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.112 16:54:15 -- target/rpc.sh@81 -- # seq 1 5 00:07:00.112 16:54:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:00.112 16:54:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:00.112 16:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.112 16:54:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.112 16:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.113 16:54:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.113 16:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.113 16:54:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.113 [2024-04-18 16:54:15.441509] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.113 16:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.113 16:54:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:00.113 16:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.113 16:54:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.113 16:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.113 16:54:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:00.113 16:54:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.113 16:54:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.113 16:54:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.113 16:54:15 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:07:00.677 16:54:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:00.677 16:54:16 -- common/autotest_common.sh@1184 -- # local i=0 00:07:00.677 16:54:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:00.677 16:54:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:00.677 16:54:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:02.591 16:54:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:02.591 16:54:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:02.591 16:54:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:02.591 16:54:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:02.591 16:54:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:02.591 16:54:18 -- common/autotest_common.sh@1194 -- # return 0 00:07:02.591 16:54:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:02.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.591 16:54:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:02.591 16:54:18 -- common/autotest_common.sh@1205 -- # local i=0 00:07:02.591 16:54:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:02.591 16:54:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.591 16:54:18 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:02.591 16:54:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.591 16:54:18 -- common/autotest_common.sh@1217 -- # return 0 00:07:02.591 16:54:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.591 16:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.591 16:54:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.591 16:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:07:02.591 16:54:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.591 16:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.591 16:54:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.591 16:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.591 16:54:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:02.591 16:54:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:02.591 16:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.591 16:54:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.591 16:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.591 16:54:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.591 16:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.591 16:54:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.591 [2024-04-18 16:54:18.213926] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.591 16:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.591 16:54:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:02.591 16:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.591 16:54:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.591 16:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.591 16:54:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:02.591 16:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.591 16:54:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.591 16:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.591 16:54:18 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.157 16:54:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:03.157 16:54:18 -- common/autotest_common.sh@1184 -- # local i=0 00:07:03.157 16:54:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:03.157 16:54:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:03.157 16:54:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:05.683 16:54:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:05.683 16:54:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:05.683 16:54:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:05.683 16:54:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:05.683 16:54:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:05.683 16:54:20 -- common/autotest_common.sh@1194 -- # return 0 00:07:05.683 16:54:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:05.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:05.683 16:54:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:05.683 16:54:20 -- common/autotest_common.sh@1205 -- # local i=0 00:07:05.683 16:54:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:05.683 16:54:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.683 16:54:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:05.683 16:54:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.683 16:54:20 -- common/autotest_common.sh@1217 -- # return 0 00:07:05.683 16:54:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.683 16:54:20 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:07:05.683 16:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.683 16:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.683 16:54:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.683 16:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.683 16:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.683 16:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.683 16:54:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:05.683 16:54:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:05.683 16:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.683 16:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.683 16:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.683 16:54:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.683 16:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.683 16:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.683 [2024-04-18 16:54:20.970161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.683 16:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.683 16:54:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:05.683 16:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.683 16:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.683 16:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.683 16:54:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:05.683 16:54:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.683 16:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.683 16:54:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.683 16:54:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.942 16:54:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.942 16:54:21 -- common/autotest_common.sh@1184 -- # local i=0 00:07:05.942 16:54:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.942 16:54:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:05.942 16:54:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:08.469 16:54:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:08.469 16:54:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:08.469 16:54:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:08.469 16:54:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:08.469 16:54:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:08.469 16:54:23 -- common/autotest_common.sh@1194 -- # return 0 00:07:08.469 16:54:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:08.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:08.469 16:54:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:08.469 16:54:23 -- common/autotest_common.sh@1205 -- # local i=0 00:07:08.469 16:54:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:08.469 16:54:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:08.469 16:54:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:08.469 16:54:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:08.469 16:54:23 -- common/autotest_common.sh@1217 -- # return 0 00:07:08.469 16:54:23 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.469 16:54:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.469 16:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:08.469 16:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.469 16:54:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:08.469 16:54:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.469 16:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:08.469 16:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.469 16:54:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:08.469 16:54:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:08.469 16:54:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.469 16:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:08.469 16:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.469 16:54:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.469 16:54:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.469 16:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:08.469 [2024-04-18 16:54:23.732613] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.469 16:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.469 16:54:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:08.469 16:54:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.469 16:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:08.469 16:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.469 16:54:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:08.469 16:54:23 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:08.469 16:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:08.469 16:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.469 16:54:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.726 16:54:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.726 16:54:24 -- common/autotest_common.sh@1184 -- # local i=0 00:07:08.726 16:54:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.726 16:54:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:08.726 16:54:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:10.619 16:54:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:10.619 16:54:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:10.619 16:54:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.619 16:54:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:10.619 16:54:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.619 16:54:26 -- common/autotest_common.sh@1194 -- # return 0 00:07:10.620 16:54:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.878 16:54:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.878 16:54:26 -- common/autotest_common.sh@1205 -- # local i=0 00:07:10.878 16:54:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:10.878 16:54:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.878 16:54:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:10.878 16:54:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.878 
16:54:26 -- common/autotest_common.sh@1217 -- # return 0 00:07:10.878 16:54:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.878 16:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.878 16:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.878 16:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.878 16:54:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.878 16:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.878 16:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.878 16:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.878 16:54:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:10.878 16:54:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:10.878 16:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.878 16:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.878 16:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.878 16:54:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.878 16:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.878 16:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.878 [2024-04-18 16:54:26.495698] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.878 16:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.878 16:54:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:10.878 16:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.878 16:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.878 16:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.878 16:54:26 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:10.878 16:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.878 16:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.878 16:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.878 16:54:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.443 16:54:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.443 16:54:27 -- common/autotest_common.sh@1184 -- # local i=0 00:07:11.443 16:54:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.443 16:54:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:11.443 16:54:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:13.973 16:54:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:13.973 16:54:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:13.973 16:54:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:13.973 16:54:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.973 16:54:29 -- common/autotest_common.sh@1194 -- # return 0 00:07:13.973 16:54:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.973 16:54:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@1205 -- # local i=0 00:07:13.973 16:54:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:13.973 16:54:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:07:13.973 16:54:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@1217 -- # return 0 00:07:13.973 16:54:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@99 -- # seq 1 5 00:07:13.973 16:54:29 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.973 16:54:29 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 [2024-04-18 16:54:29.186436] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.973 16:54:29 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 [2024-04-18 16:54:29.234489] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 
16:54:29 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.973 16:54:29 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 
[2024-04-18 16:54:29.282619] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.973 16:54:29 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 [2024-04-18 16:54:29.330787] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.973 16:54:29 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 
-- common/autotest_common.sh@10 -- # set +x 00:07:13.973 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.973 16:54:29 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.973 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.973 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 [2024-04-18 16:54:29.378932] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.974 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.974 16:54:29 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.974 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.974 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.974 16:54:29 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.974 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.974 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.974 16:54:29 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.974 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.974 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.974 16:54:29 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.974 16:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.974 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.974 16:54:29 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:13.974 16:54:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.974 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 16:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.974 16:54:29 -- target/rpc.sh@110 -- # stats='{ 00:07:13.974 "tick_rate": 2700000000, 00:07:13.974 "poll_groups": [ 00:07:13.974 { 00:07:13.974 "name": "nvmf_tgt_poll_group_0", 00:07:13.974 "admin_qpairs": 2, 00:07:13.974 "io_qpairs": 84, 00:07:13.974 "current_admin_qpairs": 0, 00:07:13.974 "current_io_qpairs": 0, 00:07:13.974 "pending_bdev_io": 0, 00:07:13.974 "completed_nvme_io": 184, 00:07:13.974 "transports": [ 00:07:13.974 { 00:07:13.974 "trtype": "TCP" 00:07:13.974 } 00:07:13.974 ] 00:07:13.974 }, 00:07:13.974 { 00:07:13.974 "name": "nvmf_tgt_poll_group_1", 00:07:13.974 "admin_qpairs": 2, 00:07:13.974 "io_qpairs": 84, 00:07:13.974 "current_admin_qpairs": 0, 00:07:13.974 "current_io_qpairs": 0, 00:07:13.974 "pending_bdev_io": 0, 00:07:13.974 "completed_nvme_io": 133, 00:07:13.974 "transports": [ 00:07:13.974 { 00:07:13.974 "trtype": "TCP" 00:07:13.974 } 00:07:13.974 ] 00:07:13.974 }, 00:07:13.974 { 00:07:13.974 "name": "nvmf_tgt_poll_group_2", 00:07:13.974 "admin_qpairs": 1, 00:07:13.974 "io_qpairs": 84, 00:07:13.974 "current_admin_qpairs": 0, 00:07:13.974 "current_io_qpairs": 0, 00:07:13.974 "pending_bdev_io": 0, 00:07:13.974 "completed_nvme_io": 185, 00:07:13.974 "transports": [ 00:07:13.974 { 00:07:13.974 "trtype": "TCP" 00:07:13.974 } 00:07:13.974 ] 00:07:13.974 }, 00:07:13.974 { 00:07:13.974 "name": "nvmf_tgt_poll_group_3", 00:07:13.974 "admin_qpairs": 2, 00:07:13.974 "io_qpairs": 84, 00:07:13.974 "current_admin_qpairs": 0, 00:07:13.974 "current_io_qpairs": 0, 00:07:13.974 "pending_bdev_io": 0, 00:07:13.974 "completed_nvme_io": 184, 00:07:13.974 "transports": [ 00:07:13.974 { 00:07:13.974 "trtype": "TCP" 00:07:13.974 } 00:07:13.974 ] 00:07:13.974 } 00:07:13.974 ] 00:07:13.974 }' 00:07:13.974 16:54:29 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 
00:07:13.974 16:54:29 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:13.974 16:54:29 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:13.974 16:54:29 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:13.974 16:54:29 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:13.974 16:54:29 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:13.974 16:54:29 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:13.974 16:54:29 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:13.974 16:54:29 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:13.974 16:54:29 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:13.974 16:54:29 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:13.974 16:54:29 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:13.974 16:54:29 -- target/rpc.sh@123 -- # nvmftestfini 00:07:13.974 16:54:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:13.974 16:54:29 -- nvmf/common.sh@117 -- # sync 00:07:13.974 16:54:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.974 16:54:29 -- nvmf/common.sh@120 -- # set +e 00:07:13.974 16:54:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.974 16:54:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.974 rmmod nvme_tcp 00:07:13.974 rmmod nvme_fabrics 00:07:13.974 rmmod nvme_keyring 00:07:13.974 16:54:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.974 16:54:29 -- nvmf/common.sh@124 -- # set -e 00:07:13.974 16:54:29 -- nvmf/common.sh@125 -- # return 0 00:07:13.974 16:54:29 -- nvmf/common.sh@478 -- # '[' -n 1605309 ']' 00:07:13.974 16:54:29 -- nvmf/common.sh@479 -- # killprocess 1605309 00:07:13.974 16:54:29 -- common/autotest_common.sh@936 -- # '[' -z 1605309 ']' 00:07:13.974 16:54:29 -- common/autotest_common.sh@940 -- # kill -0 1605309 00:07:13.974 16:54:29 -- common/autotest_common.sh@941 -- # uname 00:07:13.974 16:54:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:13.974 
16:54:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1605309 00:07:13.974 16:54:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:13.974 16:54:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:13.974 16:54:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1605309' 00:07:13.974 killing process with pid 1605309 00:07:13.974 16:54:29 -- common/autotest_common.sh@955 -- # kill 1605309 00:07:13.974 16:54:29 -- common/autotest_common.sh@960 -- # wait 1605309 00:07:14.281 16:54:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:14.281 16:54:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:14.281 16:54:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:14.281 16:54:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.281 16:54:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.281 16:54:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.281 16:54:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.281 16:54:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.820 16:54:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:16.820 00:07:16.820 real 0m24.819s 00:07:16.820 user 1m20.417s 00:07:16.820 sys 0m3.907s 00:07:16.820 16:54:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.820 16:54:31 -- common/autotest_common.sh@10 -- # set +x 00:07:16.820 ************************************ 00:07:16.820 END TEST nvmf_rpc 00:07:16.820 ************************************ 00:07:16.820 16:54:31 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:16.820 16:54:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:16.820 16:54:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.820 16:54:31 -- common/autotest_common.sh@10 -- # set +x 00:07:16.820 
************************************ 00:07:16.820 START TEST nvmf_invalid 00:07:16.820 ************************************ 00:07:16.820 16:54:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:16.820 * Looking for test storage... 00:07:16.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.820 16:54:32 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.820 16:54:32 -- nvmf/common.sh@7 -- # uname -s 00:07:16.820 16:54:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.820 16:54:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.820 16:54:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.820 16:54:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.820 16:54:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.820 16:54:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.820 16:54:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.820 16:54:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.820 16:54:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.820 16:54:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.820 16:54:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:16.820 16:54:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:16.820 16:54:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.820 16:54:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.820 16:54:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.820 16:54:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.820 16:54:32 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.820 16:54:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.820 16:54:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.820 16:54:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.820 16:54:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.820 16:54:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.820 16:54:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.820 16:54:32 -- paths/export.sh@5 -- # export PATH 00:07:16.820 16:54:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.820 16:54:32 -- nvmf/common.sh@47 -- # : 0 00:07:16.820 16:54:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:16.820 16:54:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:16.820 16:54:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.820 16:54:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.820 16:54:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.820 16:54:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:16.820 16:54:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:16.820 16:54:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:16.820 16:54:32 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:16.820 16:54:32 -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.820 16:54:32 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:16.820 16:54:32 -- target/invalid.sh@14 -- # target=foobar 00:07:16.820 16:54:32 -- target/invalid.sh@16 -- # RANDOM=0 00:07:16.820 16:54:32 -- target/invalid.sh@34 -- # nvmftestinit 00:07:16.820 16:54:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:16.820 16:54:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.820 16:54:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:16.820 16:54:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:16.820 16:54:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:16.820 16:54:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.820 16:54:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.820 16:54:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.820 16:54:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:16.820 16:54:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:16.820 16:54:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:16.820 16:54:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.724 16:54:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:18.724 16:54:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:18.724 16:54:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:18.724 16:54:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:18.724 16:54:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:18.724 16:54:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:18.724 16:54:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:18.724 16:54:34 -- nvmf/common.sh@295 -- # net_devs=() 00:07:18.724 16:54:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:18.724 16:54:34 -- nvmf/common.sh@296 -- # e810=() 00:07:18.724 16:54:34 -- nvmf/common.sh@296 -- # local -ga e810 00:07:18.724 
16:54:34 -- nvmf/common.sh@297 -- # x722=() 00:07:18.724 16:54:34 -- nvmf/common.sh@297 -- # local -ga x722 00:07:18.724 16:54:34 -- nvmf/common.sh@298 -- # mlx=() 00:07:18.724 16:54:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:18.724 16:54:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.724 16:54:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:18.724 16:54:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:18.724 16:54:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:18.724 16:54:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.724 16:54:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:18.724 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:18.724 16:54:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.724 16:54:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:18.724 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:18.724 16:54:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:18.724 16:54:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.724 16:54:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.724 16:54:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:18.724 16:54:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.724 16:54:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:18.724 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:18.724 16:54:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.724 16:54:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.724 16:54:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.724 16:54:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:18.724 16:54:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.724 16:54:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:07:18.724 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:18.724 16:54:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.724 16:54:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:18.724 16:54:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:18.724 16:54:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:18.724 16:54:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:18.724 16:54:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.724 16:54:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.724 16:54:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.724 16:54:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:18.724 16:54:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.724 16:54:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.724 16:54:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:18.724 16:54:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.724 16:54:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.724 16:54:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:18.724 16:54:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:18.724 16:54:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.724 16:54:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.724 16:54:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.724 16:54:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.724 16:54:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:18.724 16:54:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.724 16:54:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:07:18.724 16:54:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.724 16:54:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:18.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:07:18.724 00:07:18.724 --- 10.0.0.2 ping statistics --- 00:07:18.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.724 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:07:18.725 16:54:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:07:18.725 00:07:18.725 --- 10.0.0.1 ping statistics --- 00:07:18.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.725 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:07:18.725 16:54:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.725 16:54:34 -- nvmf/common.sh@411 -- # return 0 00:07:18.725 16:54:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:18.725 16:54:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.725 16:54:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:18.725 16:54:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:18.725 16:54:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.725 16:54:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:18.725 16:54:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:18.725 16:54:34 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:18.725 16:54:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:18.725 16:54:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:18.725 16:54:34 -- common/autotest_common.sh@10 -- # set +x 00:07:18.725 16:54:34 -- nvmf/common.sh@470 -- # nvmfpid=1609812 00:07:18.725 16:54:34 -- nvmf/common.sh@469 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.725 16:54:34 -- nvmf/common.sh@471 -- # waitforlisten 1609812 00:07:18.725 16:54:34 -- common/autotest_common.sh@817 -- # '[' -z 1609812 ']' 00:07:18.725 16:54:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.725 16:54:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:18.725 16:54:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.725 16:54:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:18.725 16:54:34 -- common/autotest_common.sh@10 -- # set +x 00:07:18.725 [2024-04-18 16:54:34.363978] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:07:18.725 [2024-04-18 16:54:34.364055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.725 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.983 [2024-04-18 16:54:34.434341] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.983 [2024-04-18 16:54:34.555769] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.983 [2024-04-18 16:54:34.555835] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.983 [2024-04-18 16:54:34.555861] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.983 [2024-04-18 16:54:34.555883] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:18.983 [2024-04-18 16:54:34.555903] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.983 [2024-04-18 16:54:34.555996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.983 [2024-04-18 16:54:34.556032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.983 [2024-04-18 16:54:34.556096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.983 [2024-04-18 16:54:34.556103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.916 16:54:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:19.916 16:54:35 -- common/autotest_common.sh@850 -- # return 0 00:07:19.916 16:54:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:19.916 16:54:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:19.916 16:54:35 -- common/autotest_common.sh@10 -- # set +x 00:07:19.916 16:54:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.917 16:54:35 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:19.917 16:54:35 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13483 00:07:19.917 [2024-04-18 16:54:35.571829] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:19.917 16:54:35 -- target/invalid.sh@40 -- # out='request: 00:07:19.917 { 00:07:19.917 "nqn": "nqn.2016-06.io.spdk:cnode13483", 00:07:19.917 "tgt_name": "foobar", 00:07:19.917 "method": "nvmf_create_subsystem", 00:07:19.917 "req_id": 1 00:07:19.917 } 00:07:19.917 Got JSON-RPC error response 00:07:19.917 response: 00:07:19.917 { 00:07:19.917 "code": -32603, 00:07:19.917 "message": "Unable to find target foobar" 00:07:19.917 }' 00:07:19.917 16:54:35 -- target/invalid.sh@41 -- # [[ request: 
00:07:19.917 { 00:07:19.917 "nqn": "nqn.2016-06.io.spdk:cnode13483", 00:07:19.917 "tgt_name": "foobar", 00:07:19.917 "method": "nvmf_create_subsystem", 00:07:19.917 "req_id": 1 00:07:19.917 } 00:07:19.917 Got JSON-RPC error response 00:07:19.917 response: 00:07:19.917 { 00:07:19.917 "code": -32603, 00:07:19.917 "message": "Unable to find target foobar" 00:07:19.917 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:19.917 16:54:35 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:19.917 16:54:35 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12024 00:07:20.174 [2024-04-18 16:54:35.816629] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12024: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:20.174 16:54:35 -- target/invalid.sh@45 -- # out='request: 00:07:20.174 { 00:07:20.174 "nqn": "nqn.2016-06.io.spdk:cnode12024", 00:07:20.174 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:20.174 "method": "nvmf_create_subsystem", 00:07:20.174 "req_id": 1 00:07:20.174 } 00:07:20.174 Got JSON-RPC error response 00:07:20.174 response: 00:07:20.174 { 00:07:20.174 "code": -32602, 00:07:20.174 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:20.174 }' 00:07:20.174 16:54:35 -- target/invalid.sh@46 -- # [[ request: 00:07:20.174 { 00:07:20.174 "nqn": "nqn.2016-06.io.spdk:cnode12024", 00:07:20.174 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:20.174 "method": "nvmf_create_subsystem", 00:07:20.174 "req_id": 1 00:07:20.174 } 00:07:20.174 Got JSON-RPC error response 00:07:20.174 response: 00:07:20.174 { 00:07:20.174 "code": -32602, 00:07:20.174 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:20.174 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:20.174 16:54:35 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:20.174 16:54:35 -- target/invalid.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4681 00:07:20.432 [2024-04-18 16:54:36.061402] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4681: invalid model number 'SPDK_Controller' 00:07:20.432 16:54:36 -- target/invalid.sh@50 -- # out='request: 00:07:20.432 { 00:07:20.432 "nqn": "nqn.2016-06.io.spdk:cnode4681", 00:07:20.432 "model_number": "SPDK_Controller\u001f", 00:07:20.432 "method": "nvmf_create_subsystem", 00:07:20.432 "req_id": 1 00:07:20.432 } 00:07:20.432 Got JSON-RPC error response 00:07:20.432 response: 00:07:20.432 { 00:07:20.432 "code": -32602, 00:07:20.432 "message": "Invalid MN SPDK_Controller\u001f" 00:07:20.432 }' 00:07:20.432 16:54:36 -- target/invalid.sh@51 -- # [[ request: 00:07:20.432 { 00:07:20.432 "nqn": "nqn.2016-06.io.spdk:cnode4681", 00:07:20.432 "model_number": "SPDK_Controller\u001f", 00:07:20.432 "method": "nvmf_create_subsystem", 00:07:20.432 "req_id": 1 00:07:20.432 } 00:07:20.432 Got JSON-RPC error response 00:07:20.432 response: 00:07:20.432 { 00:07:20.432 "code": -32602, 00:07:20.432 "message": "Invalid MN SPDK_Controller\u001f" 00:07:20.432 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:20.432 16:54:36 -- target/invalid.sh@54 -- # gen_random_s 21 00:07:20.432 16:54:36 -- target/invalid.sh@19 -- # local length=21 ll 00:07:20.432 16:54:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:20.432 16:54:36 -- 
target/invalid.sh@21 -- # local chars 00:07:20.432 16:54:36 -- target/invalid.sh@22 -- # local string 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 87 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=W 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 42 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+='*' 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 48 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=0 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 115 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=s 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 74 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=J 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 81 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:20.432 16:54:36 
-- target/invalid.sh@25 -- # string+=Q 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 120 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=x 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 99 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=c 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 82 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=R 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 111 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=o 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 112 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=p 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 44 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=, 00:07:20.432 16:54:36 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 126 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+='~' 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # printf %x 51 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:20.432 16:54:36 -- target/invalid.sh@25 -- # string+=3 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.432 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # printf %x 95 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # string+=_ 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # printf %x 40 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # string+='(' 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # printf %x 37 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # string+=% 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # printf %x 116 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # string+=t 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.690 16:54:36 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # printf %x 86 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # string+=V 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # printf %x 109 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # string+=m 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # printf %x 72 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:20.690 16:54:36 -- target/invalid.sh@25 -- # string+=H 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.690 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.690 16:54:36 -- target/invalid.sh@28 -- # [[ W == \- ]] 00:07:20.690 16:54:36 -- target/invalid.sh@31 -- # echo 'W*0sJQxcRop,~3_(%tVmH' 00:07:20.690 16:54:36 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'W*0sJQxcRop,~3_(%tVmH' nqn.2016-06.io.spdk:cnode1775 00:07:20.690 [2024-04-18 16:54:36.386450] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1775: invalid serial number 'W*0sJQxcRop,~3_(%tVmH' 00:07:20.949 16:54:36 -- target/invalid.sh@54 -- # out='request: 00:07:20.949 { 00:07:20.949 "nqn": "nqn.2016-06.io.spdk:cnode1775", 00:07:20.949 "serial_number": "W*0sJQxcRop,~3_(%tVmH", 00:07:20.949 "method": "nvmf_create_subsystem", 00:07:20.949 "req_id": 1 00:07:20.950 } 00:07:20.950 Got JSON-RPC error response 00:07:20.950 response: 00:07:20.950 { 00:07:20.950 "code": -32602, 00:07:20.950 "message": "Invalid SN W*0sJQxcRop,~3_(%tVmH" 00:07:20.950 }' 
00:07:20.950 16:54:36 -- target/invalid.sh@55 -- # [[ request: 00:07:20.950 { 00:07:20.950 "nqn": "nqn.2016-06.io.spdk:cnode1775", 00:07:20.950 "serial_number": "W*0sJQxcRop,~3_(%tVmH", 00:07:20.950 "method": "nvmf_create_subsystem", 00:07:20.950 "req_id": 1 00:07:20.950 } 00:07:20.950 Got JSON-RPC error response 00:07:20.950 response: 00:07:20.950 { 00:07:20.950 "code": -32602, 00:07:20.950 "message": "Invalid SN W*0sJQxcRop,~3_(%tVmH" 00:07:20.950 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:20.950 16:54:36 -- target/invalid.sh@58 -- # gen_random_s 41 00:07:20.950 16:54:36 -- target/invalid.sh@19 -- # local length=41 ll 00:07:20.950 16:54:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:20.950 16:54:36 -- target/invalid.sh@21 -- # local chars 00:07:20.950 16:54:36 -- target/invalid.sh@22 -- # local string 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 48 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=0 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 39 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=\' 00:07:20.950 16:54:36 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 126 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+='~' 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 124 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+='|' 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 117 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=u 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 72 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=H 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 66 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=B 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 118 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=v 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 
-- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 57 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=9 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 122 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=z 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 96 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+='`' 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 63 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+='?' 
00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 53 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=5 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 53 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=5 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 42 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+='*' 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 64 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=@ 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 112 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=p 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 97 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=a 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 120 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=x 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 95 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=_ 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 114 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=r 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 49 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=1 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 103 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=g 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 89 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=Y 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 48 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=0 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 35 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+='#' 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 96 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+='`' 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 126 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+='~' 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 52 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=4 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 114 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=r 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- 
# printf %x 88 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=X 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 82 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=R 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 112 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=p 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 65 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=A 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # printf %x 44 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:20.950 16:54:36 -- target/invalid.sh@25 -- # string+=, 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.950 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # printf %x 97 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # string+=a 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # printf %x 45 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # 
echo -e '\x2d' 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # string+=- 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # printf %x 119 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # string+=w 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # printf %x 101 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # string+=e 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # printf %x 112 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # string+=p 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # printf %x 75 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:20.951 16:54:36 -- target/invalid.sh@25 -- # string+=K 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.951 16:54:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.951 16:54:36 -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:07:20.951 16:54:36 -- target/invalid.sh@31 -- # echo '0'\''~|uHBv9z`?55*@pax_r1gY0#`~4rXRpA,a-wepK' 00:07:20.951 16:54:36 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0'\''~|uHBv9z`?55*@pax_r1gY0#`~4rXRpA,a-wepK' nqn.2016-06.io.spdk:cnode23127 00:07:21.208 [2024-04-18 16:54:36.771714] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode23127: invalid model number '0'~|uHBv9z`?55*@pax_r1gY0#`~4rXRpA,a-wepK' 00:07:21.208 16:54:36 -- target/invalid.sh@58 -- # out='request: 00:07:21.208 { 00:07:21.208 "nqn": "nqn.2016-06.io.spdk:cnode23127", 00:07:21.208 "model_number": "0'\''~|uHBv9z`?55*@pax_r1gY0#`~4rXRpA,a-wepK", 00:07:21.208 "method": "nvmf_create_subsystem", 00:07:21.208 "req_id": 1 00:07:21.208 } 00:07:21.208 Got JSON-RPC error response 00:07:21.208 response: 00:07:21.208 { 00:07:21.208 "code": -32602, 00:07:21.208 "message": "Invalid MN 0'\''~|uHBv9z`?55*@pax_r1gY0#`~4rXRpA,a-wepK" 00:07:21.208 }' 00:07:21.208 16:54:36 -- target/invalid.sh@59 -- # [[ request: 00:07:21.208 { 00:07:21.208 "nqn": "nqn.2016-06.io.spdk:cnode23127", 00:07:21.208 "model_number": "0'~|uHBv9z`?55*@pax_r1gY0#`~4rXRpA,a-wepK", 00:07:21.208 "method": "nvmf_create_subsystem", 00:07:21.209 "req_id": 1 00:07:21.209 } 00:07:21.209 Got JSON-RPC error response 00:07:21.209 response: 00:07:21.209 { 00:07:21.209 "code": -32602, 00:07:21.209 "message": "Invalid MN 0'~|uHBv9z`?55*@pax_r1gY0#`~4rXRpA,a-wepK" 00:07:21.209 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:21.209 16:54:36 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:21.466 [2024-04-18 16:54:37.032642] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.467 16:54:37 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:21.724 16:54:37 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:21.724 16:54:37 -- target/invalid.sh@67 -- # echo '' 00:07:21.724 16:54:37 -- target/invalid.sh@67 -- # head -n 1 00:07:21.724 16:54:37 -- target/invalid.sh@67 -- # IP= 00:07:21.724 16:54:37 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 
4421 00:07:21.982 [2024-04-18 16:54:37.518304] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:21.982 16:54:37 -- target/invalid.sh@69 -- # out='request: 00:07:21.982 { 00:07:21.982 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:21.982 "listen_address": { 00:07:21.982 "trtype": "tcp", 00:07:21.982 "traddr": "", 00:07:21.982 "trsvcid": "4421" 00:07:21.982 }, 00:07:21.982 "method": "nvmf_subsystem_remove_listener", 00:07:21.982 "req_id": 1 00:07:21.982 } 00:07:21.982 Got JSON-RPC error response 00:07:21.982 response: 00:07:21.982 { 00:07:21.982 "code": -32602, 00:07:21.982 "message": "Invalid parameters" 00:07:21.982 }' 00:07:21.982 16:54:37 -- target/invalid.sh@70 -- # [[ request: 00:07:21.982 { 00:07:21.982 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:21.982 "listen_address": { 00:07:21.982 "trtype": "tcp", 00:07:21.982 "traddr": "", 00:07:21.982 "trsvcid": "4421" 00:07:21.982 }, 00:07:21.982 "method": "nvmf_subsystem_remove_listener", 00:07:21.982 "req_id": 1 00:07:21.982 } 00:07:21.982 Got JSON-RPC error response 00:07:21.982 response: 00:07:21.982 { 00:07:21.982 "code": -32602, 00:07:21.982 "message": "Invalid parameters" 00:07:21.982 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:21.982 16:54:37 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10432 -i 0 00:07:22.240 [2024-04-18 16:54:37.759057] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10432: invalid cntlid range [0-65519] 00:07:22.240 16:54:37 -- target/invalid.sh@73 -- # out='request: 00:07:22.240 { 00:07:22.240 "nqn": "nqn.2016-06.io.spdk:cnode10432", 00:07:22.240 "min_cntlid": 0, 00:07:22.240 "method": "nvmf_create_subsystem", 00:07:22.240 "req_id": 1 00:07:22.240 } 00:07:22.240 Got JSON-RPC error response 00:07:22.240 response: 00:07:22.240 { 00:07:22.240 "code": -32602, 00:07:22.240 "message": "Invalid cntlid range 
[0-65519]" 00:07:22.240 }' 00:07:22.240 16:54:37 -- target/invalid.sh@74 -- # [[ request: 00:07:22.240 { 00:07:22.240 "nqn": "nqn.2016-06.io.spdk:cnode10432", 00:07:22.240 "min_cntlid": 0, 00:07:22.240 "method": "nvmf_create_subsystem", 00:07:22.240 "req_id": 1 00:07:22.240 } 00:07:22.240 Got JSON-RPC error response 00:07:22.240 response: 00:07:22.240 { 00:07:22.240 "code": -32602, 00:07:22.240 "message": "Invalid cntlid range [0-65519]" 00:07:22.240 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:22.240 16:54:37 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20862 -i 65520 00:07:22.535 [2024-04-18 16:54:37.995868] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20862: invalid cntlid range [65520-65519] 00:07:22.535 16:54:38 -- target/invalid.sh@75 -- # out='request: 00:07:22.535 { 00:07:22.535 "nqn": "nqn.2016-06.io.spdk:cnode20862", 00:07:22.535 "min_cntlid": 65520, 00:07:22.535 "method": "nvmf_create_subsystem", 00:07:22.535 "req_id": 1 00:07:22.535 } 00:07:22.535 Got JSON-RPC error response 00:07:22.535 response: 00:07:22.535 { 00:07:22.535 "code": -32602, 00:07:22.535 "message": "Invalid cntlid range [65520-65519]" 00:07:22.535 }' 00:07:22.535 16:54:38 -- target/invalid.sh@76 -- # [[ request: 00:07:22.535 { 00:07:22.535 "nqn": "nqn.2016-06.io.spdk:cnode20862", 00:07:22.535 "min_cntlid": 65520, 00:07:22.535 "method": "nvmf_create_subsystem", 00:07:22.535 "req_id": 1 00:07:22.535 } 00:07:22.535 Got JSON-RPC error response 00:07:22.535 response: 00:07:22.535 { 00:07:22.535 "code": -32602, 00:07:22.535 "message": "Invalid cntlid range [65520-65519]" 00:07:22.535 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:22.535 16:54:38 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6695 -I 0 00:07:22.794 [2024-04-18 16:54:38.244727] 
nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6695: invalid cntlid range [1-0] 00:07:22.794 16:54:38 -- target/invalid.sh@77 -- # out='request: 00:07:22.794 { 00:07:22.794 "nqn": "nqn.2016-06.io.spdk:cnode6695", 00:07:22.794 "max_cntlid": 0, 00:07:22.794 "method": "nvmf_create_subsystem", 00:07:22.794 "req_id": 1 00:07:22.794 } 00:07:22.794 Got JSON-RPC error response 00:07:22.794 response: 00:07:22.794 { 00:07:22.794 "code": -32602, 00:07:22.794 "message": "Invalid cntlid range [1-0]" 00:07:22.794 }' 00:07:22.794 16:54:38 -- target/invalid.sh@78 -- # [[ request: 00:07:22.794 { 00:07:22.794 "nqn": "nqn.2016-06.io.spdk:cnode6695", 00:07:22.794 "max_cntlid": 0, 00:07:22.794 "method": "nvmf_create_subsystem", 00:07:22.794 "req_id": 1 00:07:22.794 } 00:07:22.794 Got JSON-RPC error response 00:07:22.794 response: 00:07:22.794 { 00:07:22.794 "code": -32602, 00:07:22.794 "message": "Invalid cntlid range [1-0]" 00:07:22.794 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:22.794 16:54:38 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12796 -I 65520 00:07:22.794 [2024-04-18 16:54:38.481531] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12796: invalid cntlid range [1-65520] 00:07:23.052 16:54:38 -- target/invalid.sh@79 -- # out='request: 00:07:23.052 { 00:07:23.052 "nqn": "nqn.2016-06.io.spdk:cnode12796", 00:07:23.052 "max_cntlid": 65520, 00:07:23.052 "method": "nvmf_create_subsystem", 00:07:23.052 "req_id": 1 00:07:23.052 } 00:07:23.052 Got JSON-RPC error response 00:07:23.052 response: 00:07:23.052 { 00:07:23.052 "code": -32602, 00:07:23.052 "message": "Invalid cntlid range [1-65520]" 00:07:23.052 }' 00:07:23.052 16:54:38 -- target/invalid.sh@80 -- # [[ request: 00:07:23.052 { 00:07:23.052 "nqn": "nqn.2016-06.io.spdk:cnode12796", 00:07:23.052 "max_cntlid": 65520, 00:07:23.052 "method": 
"nvmf_create_subsystem", 00:07:23.052 "req_id": 1 00:07:23.052 } 00:07:23.052 Got JSON-RPC error response 00:07:23.052 response: 00:07:23.052 { 00:07:23.052 "code": -32602, 00:07:23.052 "message": "Invalid cntlid range [1-65520]" 00:07:23.052 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:23.052 16:54:38 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31305 -i 6 -I 5 00:07:23.052 [2024-04-18 16:54:38.734354] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31305: invalid cntlid range [6-5] 00:07:23.052 16:54:38 -- target/invalid.sh@83 -- # out='request: 00:07:23.052 { 00:07:23.052 "nqn": "nqn.2016-06.io.spdk:cnode31305", 00:07:23.052 "min_cntlid": 6, 00:07:23.052 "max_cntlid": 5, 00:07:23.052 "method": "nvmf_create_subsystem", 00:07:23.052 "req_id": 1 00:07:23.052 } 00:07:23.052 Got JSON-RPC error response 00:07:23.052 response: 00:07:23.052 { 00:07:23.052 "code": -32602, 00:07:23.052 "message": "Invalid cntlid range [6-5]" 00:07:23.052 }' 00:07:23.052 16:54:38 -- target/invalid.sh@84 -- # [[ request: 00:07:23.052 { 00:07:23.052 "nqn": "nqn.2016-06.io.spdk:cnode31305", 00:07:23.052 "min_cntlid": 6, 00:07:23.052 "max_cntlid": 5, 00:07:23.052 "method": "nvmf_create_subsystem", 00:07:23.052 "req_id": 1 00:07:23.052 } 00:07:23.052 Got JSON-RPC error response 00:07:23.052 response: 00:07:23.052 { 00:07:23.052 "code": -32602, 00:07:23.052 "message": "Invalid cntlid range [6-5]" 00:07:23.052 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:23.052 16:54:38 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:23.310 16:54:38 -- target/invalid.sh@87 -- # out='request: 00:07:23.310 { 00:07:23.310 "name": "foobar", 00:07:23.310 "method": "nvmf_delete_target", 00:07:23.310 "req_id": 1 00:07:23.310 } 00:07:23.310 Got JSON-RPC error 
response 00:07:23.310 response: 00:07:23.310 { 00:07:23.310 "code": -32602, 00:07:23.310 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:23.310 }' 00:07:23.310 16:54:38 -- target/invalid.sh@88 -- # [[ request: 00:07:23.310 { 00:07:23.310 "name": "foobar", 00:07:23.310 "method": "nvmf_delete_target", 00:07:23.310 "req_id": 1 00:07:23.310 } 00:07:23.310 Got JSON-RPC error response 00:07:23.310 response: 00:07:23.310 { 00:07:23.310 "code": -32602, 00:07:23.310 "message": "The specified target doesn't exist, cannot delete it." 00:07:23.310 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:23.310 16:54:38 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:23.310 16:54:38 -- target/invalid.sh@91 -- # nvmftestfini 00:07:23.310 16:54:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:23.310 16:54:38 -- nvmf/common.sh@117 -- # sync 00:07:23.310 16:54:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.310 16:54:38 -- nvmf/common.sh@120 -- # set +e 00:07:23.310 16:54:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.310 16:54:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.310 rmmod nvme_tcp 00:07:23.310 rmmod nvme_fabrics 00:07:23.310 rmmod nvme_keyring 00:07:23.310 16:54:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.310 16:54:38 -- nvmf/common.sh@124 -- # set -e 00:07:23.310 16:54:38 -- nvmf/common.sh@125 -- # return 0 00:07:23.310 16:54:38 -- nvmf/common.sh@478 -- # '[' -n 1609812 ']' 00:07:23.310 16:54:38 -- nvmf/common.sh@479 -- # killprocess 1609812 00:07:23.310 16:54:38 -- common/autotest_common.sh@936 -- # '[' -z 1609812 ']' 00:07:23.310 16:54:38 -- common/autotest_common.sh@940 -- # kill -0 1609812 00:07:23.310 16:54:38 -- common/autotest_common.sh@941 -- # uname 00:07:23.310 16:54:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:23.310 16:54:38 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 1609812 00:07:23.310 16:54:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:23.310 16:54:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:23.310 16:54:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1609812' 00:07:23.310 killing process with pid 1609812 00:07:23.310 16:54:38 -- common/autotest_common.sh@955 -- # kill 1609812 00:07:23.310 16:54:38 -- common/autotest_common.sh@960 -- # wait 1609812 00:07:23.569 16:54:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:23.569 16:54:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:23.569 16:54:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:23.569 16:54:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.569 16:54:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.569 16:54:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.569 16:54:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.569 16:54:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.109 16:54:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:26.109 00:07:26.109 real 0m9.244s 00:07:26.109 user 0m22.344s 00:07:26.109 sys 0m2.417s 00:07:26.109 16:54:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:26.109 16:54:41 -- common/autotest_common.sh@10 -- # set +x 00:07:26.109 ************************************ 00:07:26.109 END TEST nvmf_invalid 00:07:26.109 ************************************ 00:07:26.109 16:54:41 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:26.109 16:54:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:26.109 16:54:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.109 16:54:41 -- common/autotest_common.sh@10 -- # set +x 00:07:26.109 ************************************ 00:07:26.109 START 
TEST nvmf_abort 00:07:26.109 ************************************ 00:07:26.109 16:54:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:26.109 * Looking for test storage... 00:07:26.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.109 16:54:41 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.109 16:54:41 -- nvmf/common.sh@7 -- # uname -s 00:07:26.109 16:54:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.109 16:54:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.109 16:54:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.110 16:54:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.110 16:54:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.110 16:54:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.110 16:54:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.110 16:54:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.110 16:54:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.110 16:54:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.110 16:54:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:26.110 16:54:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:26.110 16:54:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.110 16:54:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.110 16:54:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.110 16:54:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.110 16:54:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.110 16:54:41 -- scripts/common.sh@502 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:26.110 16:54:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.110 16:54:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.110 16:54:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.110 16:54:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.110 16:54:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.110 16:54:41 -- paths/export.sh@5 -- # export PATH 00:07:26.110 
16:54:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.110 16:54:41 -- nvmf/common.sh@47 -- # : 0 00:07:26.110 16:54:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.110 16:54:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.110 16:54:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.110 16:54:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.110 16:54:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.110 16:54:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.110 16:54:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.110 16:54:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.110 16:54:41 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.110 16:54:41 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:26.110 16:54:41 -- target/abort.sh@14 -- # nvmftestinit 00:07:26.110 16:54:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:26.110 16:54:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.110 16:54:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:26.110 16:54:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:26.110 16:54:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:26.110 16:54:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.110 16:54:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.110 16:54:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:26.110 16:54:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:26.110 16:54:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:26.110 16:54:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.110 16:54:41 -- common/autotest_common.sh@10 -- # set +x 00:07:28.014 16:54:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:28.014 16:54:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.014 16:54:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.014 16:54:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.014 16:54:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.014 16:54:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.014 16:54:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.014 16:54:43 -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.014 16:54:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.014 16:54:43 -- nvmf/common.sh@296 -- # e810=() 00:07:28.014 16:54:43 -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.014 16:54:43 -- nvmf/common.sh@297 -- # x722=() 00:07:28.014 16:54:43 -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.014 16:54:43 -- nvmf/common.sh@298 -- # mlx=() 00:07:28.014 16:54:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.014 16:54:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:28.014 16:54:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.014 16:54:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.014 16:54:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.014 16:54:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.014 16:54:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.014 16:54:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.014 16:54:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.014 16:54:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.014 16:54:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:28.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:28.015 16:54:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.015 16:54:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:28.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:28.015 16:54:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.015 16:54:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.015 
16:54:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.015 16:54:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.015 16:54:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:28.015 16:54:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.015 16:54:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:28.015 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:28.015 16:54:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.015 16:54:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.015 16:54:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.015 16:54:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:28.015 16:54:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.015 16:54:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:28.015 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:28.015 16:54:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.015 16:54:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:28.015 16:54:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:28.015 16:54:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:28.015 16:54:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.015 16:54:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.015 16:54:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.015 16:54:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.015 16:54:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.015 16:54:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.015 16:54:43 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.015 16:54:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.015 16:54:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.015 16:54:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.015 16:54:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.015 16:54:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.015 16:54:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.015 16:54:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.015 16:54:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.015 16:54:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.015 16:54:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.015 16:54:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.015 16:54:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.015 16:54:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:07:28.015 00:07:28.015 --- 10.0.0.2 ping statistics --- 00:07:28.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.015 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:28.015 16:54:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:07:28.015 00:07:28.015 --- 10.0.0.1 ping statistics --- 00:07:28.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.015 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:07:28.015 16:54:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.015 16:54:43 -- nvmf/common.sh@411 -- # return 0 00:07:28.015 16:54:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:28.015 16:54:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.015 16:54:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:28.015 16:54:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.015 16:54:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:28.015 16:54:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:28.015 16:54:43 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:28.015 16:54:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:28.015 16:54:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:28.015 16:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.015 16:54:43 -- nvmf/common.sh@470 -- # nvmfpid=1612460 00:07:28.015 16:54:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:28.015 16:54:43 -- nvmf/common.sh@471 -- # waitforlisten 1612460 00:07:28.015 16:54:43 -- common/autotest_common.sh@817 -- # '[' -z 1612460 ']' 00:07:28.015 16:54:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.015 16:54:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:28.015 16:54:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:28.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.015 16:54:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:28.015 16:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.015 [2024-04-18 16:54:43.634460] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:07:28.015 [2024-04-18 16:54:43.634550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.015 [2024-04-18 16:54:43.700322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.274 [2024-04-18 16:54:43.808998] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.274 [2024-04-18 16:54:43.809070] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.274 [2024-04-18 16:54:43.809084] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.274 [2024-04-18 16:54:43.809096] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.274 [2024-04-18 16:54:43.809107] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:28.274 [2024-04-18 16:54:43.809212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.274 [2024-04-18 16:54:43.809242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.274 [2024-04-18 16:54:43.809244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.274 16:54:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:28.274 16:54:43 -- common/autotest_common.sh@850 -- # return 0 00:07:28.274 16:54:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:28.274 16:54:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:28.274 16:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 16:54:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.274 16:54:43 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:28.274 16:54:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.274 16:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 [2024-04-18 16:54:43.956772] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.274 16:54:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.274 16:54:43 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:28.274 16:54:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.274 16:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:28.534 Malloc0 00:07:28.534 16:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.534 16:54:44 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.534 16:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.534 16:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.534 Delay0 00:07:28.534 16:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.534 16:54:44 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.534 16:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.534 16:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.534 16:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.534 16:54:44 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:28.534 16:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.534 16:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.534 16:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.534 16:54:44 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:28.534 16:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.534 16:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.534 [2024-04-18 16:54:44.036277] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.534 16:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.534 16:54:44 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.534 16:54:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.535 16:54:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.535 16:54:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.535 16:54:44 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:28.535 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.535 [2024-04-18 16:54:44.141412] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:31.074 Initializing NVMe Controllers 00:07:31.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:07:31.074 controller IO queue size 128 less than required 00:07:31.074 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:31.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:31.074 Initialization complete. Launching workers. 00:07:31.074 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34650 00:07:31.074 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34715, failed to submit 62 00:07:31.074 success 34654, unsuccess 61, failed 0 00:07:31.074 16:54:46 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.074 16:54:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:31.074 16:54:46 -- common/autotest_common.sh@10 -- # set +x 00:07:31.074 16:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:31.074 16:54:46 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:31.074 16:54:46 -- target/abort.sh@38 -- # nvmftestfini 00:07:31.074 16:54:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:31.074 16:54:46 -- nvmf/common.sh@117 -- # sync 00:07:31.074 16:54:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.074 16:54:46 -- nvmf/common.sh@120 -- # set +e 00:07:31.074 16:54:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.074 16:54:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.074 rmmod nvme_tcp 00:07:31.074 rmmod nvme_fabrics 00:07:31.074 rmmod nvme_keyring 00:07:31.074 16:54:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.074 16:54:46 -- nvmf/common.sh@124 -- # set -e 00:07:31.074 16:54:46 -- nvmf/common.sh@125 -- # return 0 00:07:31.074 16:54:46 -- nvmf/common.sh@478 -- # '[' -n 1612460 ']' 00:07:31.075 16:54:46 -- nvmf/common.sh@479 -- # killprocess 1612460 00:07:31.075 16:54:46 -- common/autotest_common.sh@936 -- # '[' -z 1612460 ']' 00:07:31.075 16:54:46 
-- common/autotest_common.sh@940 -- # kill -0 1612460 00:07:31.075 16:54:46 -- common/autotest_common.sh@941 -- # uname 00:07:31.075 16:54:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:31.075 16:54:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1612460 00:07:31.075 16:54:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:31.075 16:54:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:31.075 16:54:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1612460' 00:07:31.075 killing process with pid 1612460 00:07:31.075 16:54:46 -- common/autotest_common.sh@955 -- # kill 1612460 00:07:31.075 16:54:46 -- common/autotest_common.sh@960 -- # wait 1612460 00:07:31.075 16:54:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:31.075 16:54:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:31.075 16:54:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:31.075 16:54:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.075 16:54:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.075 16:54:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.075 16:54:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.075 16:54:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.627 16:54:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.627 00:07:33.627 real 0m7.334s 00:07:33.627 user 0m10.816s 00:07:33.627 sys 0m2.474s 00:07:33.627 16:54:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.627 16:54:48 -- common/autotest_common.sh@10 -- # set +x 00:07:33.627 ************************************ 00:07:33.627 END TEST nvmf_abort 00:07:33.627 ************************************ 00:07:33.627 16:54:48 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
00:07:33.627 16:54:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:33.627 16:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.627 16:54:48 -- common/autotest_common.sh@10 -- # set +x 00:07:33.627 ************************************ 00:07:33.627 START TEST nvmf_ns_hotplug_stress 00:07:33.627 ************************************ 00:07:33.627 16:54:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:33.627 * Looking for test storage... 00:07:33.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.627 16:54:48 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.627 16:54:48 -- nvmf/common.sh@7 -- # uname -s 00:07:33.627 16:54:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.627 16:54:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.627 16:54:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.627 16:54:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.627 16:54:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.627 16:54:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.627 16:54:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.627 16:54:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.627 16:54:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.627 16:54:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.627 16:54:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.627 16:54:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.627 16:54:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.627 16:54:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:07:33.627 16:54:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.627 16:54:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.627 16:54:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.627 16:54:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.627 16:54:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.627 16:54:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.627 16:54:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.627 16:54:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.627 16:54:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.627 16:54:48 -- paths/export.sh@5 -- # export PATH 00:07:33.627 16:54:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.627 16:54:48 -- nvmf/common.sh@47 -- # : 0 00:07:33.627 16:54:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.627 16:54:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.627 16:54:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.627 16:54:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.627 16:54:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.627 16:54:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.627 16:54:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.627 16:54:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.627 16:54:48 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.627 16:54:48 -- target/ns_hotplug_stress.sh@13 -- # 
nvmftestinit 00:07:33.627 16:54:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:33.627 16:54:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.627 16:54:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:33.627 16:54:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:33.627 16:54:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:33.627 16:54:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.627 16:54:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.627 16:54:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.627 16:54:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:33.627 16:54:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:33.627 16:54:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.627 16:54:48 -- common/autotest_common.sh@10 -- # set +x 00:07:35.547 16:54:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:35.547 16:54:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.547 16:54:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.547 16:54:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.547 16:54:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.547 16:54:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.547 16:54:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.547 16:54:50 -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.547 16:54:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.547 16:54:50 -- nvmf/common.sh@296 -- # e810=() 00:07:35.547 16:54:50 -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.547 16:54:50 -- nvmf/common.sh@297 -- # x722=() 00:07:35.547 16:54:50 -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.547 16:54:50 -- nvmf/common.sh@298 -- # mlx=() 00:07:35.547 16:54:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.547 16:54:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.548 16:54:50 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.548 16:54:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.548 16:54:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.548 16:54:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.548 16:54:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.548 16:54:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:35.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:35.548 16:54:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.548 16:54:50 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:35.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:35.548 16:54:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.548 16:54:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.548 16:54:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.548 16:54:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:35.548 16:54:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.548 16:54:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:35.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:35.548 16:54:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.548 16:54:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.548 16:54:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.548 16:54:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:35.548 16:54:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.548 16:54:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:35.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:35.548 16:54:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.548 16:54:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:35.548 16:54:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:35.548 16:54:50 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:07:35.548 16:54:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:35.548 16:54:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:35.548 16:54:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.548 16:54:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.548 16:54:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.548 16:54:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.548 16:54:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.548 16:54:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.548 16:54:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.548 16:54:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.548 16:54:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.548 16:54:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.548 16:54:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.548 16:54:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.548 16:54:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.548 16:54:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.548 16:54:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.548 16:54:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.548 16:54:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.548 16:54:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.548 16:54:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.548 16:54:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:35.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:07:35.548 00:07:35.548 --- 10.0.0.2 ping statistics --- 00:07:35.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.548 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:35.548 16:54:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:07:35.548 00:07:35.548 --- 10.0.0.1 ping statistics --- 00:07:35.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.548 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:35.548 16:54:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.548 16:54:51 -- nvmf/common.sh@411 -- # return 0 00:07:35.548 16:54:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:35.548 16:54:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.548 16:54:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:35.548 16:54:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:35.548 16:54:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.548 16:54:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:35.548 16:54:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:35.548 16:54:51 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:07:35.548 16:54:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:35.548 16:54:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:35.548 16:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:35.548 16:54:51 -- nvmf/common.sh@470 -- # nvmfpid=1614808 00:07:35.548 16:54:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:35.548 16:54:51 -- nvmf/common.sh@471 -- # waitforlisten 1614808 00:07:35.548 16:54:51 -- 
common/autotest_common.sh@817 -- # '[' -z 1614808 ']' 00:07:35.548 16:54:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.548 16:54:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:35.548 16:54:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.548 16:54:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:35.548 16:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:35.548 [2024-04-18 16:54:51.069241] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:07:35.548 [2024-04-18 16:54:51.069322] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.548 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.548 [2024-04-18 16:54:51.131586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.548 [2024-04-18 16:54:51.241151] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.548 [2024-04-18 16:54:51.241219] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.548 [2024-04-18 16:54:51.241236] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.548 [2024-04-18 16:54:51.241249] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.548 [2024-04-18 16:54:51.241261] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.548 [2024-04-18 16:54:51.241344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.548 [2024-04-18 16:54:51.241402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.548 [2024-04-18 16:54:51.241406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.842 16:54:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:35.842 16:54:51 -- common/autotest_common.sh@850 -- # return 0 00:07:35.842 16:54:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:35.842 16:54:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:35.842 16:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:35.842 16:54:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.842 16:54:51 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:07:35.842 16:54:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:36.100 [2024-04-18 16:54:51.598621] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.100 16:54:51 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:36.357 16:54:51 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.615 [2024-04-18 16:54:52.105193] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.615 16:54:52 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.873 16:54:52 -- target/ns_hotplug_stress.sh@23 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:37.131 Malloc0 00:07:37.131 16:54:52 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:37.388 Delay0 00:07:37.388 16:54:52 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.648 16:54:53 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:37.648 NULL1 00:07:37.906 16:54:53 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:37.906 16:54:53 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1615106 00:07:37.906 16:54:53 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:37.906 16:54:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:37.906 16:54:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.165 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.165 16:54:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.424 16:54:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:07:38.424 16:54:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:38.681 true 00:07:38.681 16:54:54 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:38.682 16:54:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.939 16:54:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.197 16:54:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:07:39.197 16:54:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:39.455 true 00:07:39.455 16:54:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:39.455 16:54:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.393 Read completed with error (sct=0, sc=11) 00:07:40.393 16:54:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.651 16:54:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:07:40.651 16:54:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:40.908 true 00:07:40.908 16:54:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:40.908 16:54:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.165 16:54:56 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.423 16:54:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:07:41.423 16:54:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:41.680 true 00:07:41.680 16:54:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:41.680 16:54:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.618 16:54:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.876 16:54:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:07:42.876 16:54:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:43.133 true 00:07:43.133 16:54:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:43.133 16:54:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.071 16:54:59 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.071 16:54:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:07:44.071 16:54:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:44.328 true 00:07:44.328 16:54:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:44.328 16:54:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.586 16:55:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.844 16:55:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:07:44.844 16:55:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:45.102 true 00:07:45.102 16:55:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:45.102 16:55:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.036 16:55:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.293 16:55:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:07:46.293 16:55:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:46.551 true 00:07:46.551 16:55:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 
00:07:46.551 16:55:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.808 16:55:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.066 16:55:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:07:47.066 16:55:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:47.324 true 00:07:47.324 16:55:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:47.324 16:55:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.261 16:55:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.518 16:55:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:07:48.518 16:55:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:48.776 true 00:07:48.776 16:55:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:48.776 16:55:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.034 16:55:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.292 16:55:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:07:49.292 16:55:04 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:49.292 true 00:07:49.552 16:55:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:49.552 16:55:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.489 16:55:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.489 16:55:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:07:50.489 16:55:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:50.746 true 00:07:50.746 16:55:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:50.746 16:55:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.004 16:55:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.295 16:55:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:07:51.295 16:55:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:51.553 true 00:07:51.553 16:55:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:51.553 16:55:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.485 16:55:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.743 16:55:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:07:52.743 16:55:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:53.000 true 00:07:53.000 16:55:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:53.000 16:55:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.257 16:55:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.515 16:55:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:07:53.515 16:55:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:53.773 true 00:07:53.773 16:55:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:53.773 16:55:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.706 16:55:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.963 16:55:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:07:54.963 16:55:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:55.222 true 
00:07:55.222 16:55:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:55.222 16:55:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.479 16:55:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.737 16:55:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:07:55.737 16:55:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:55.994 true 00:07:55.994 16:55:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:55.994 16:55:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.926 16:55:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.926 16:55:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:07:56.926 16:55:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:57.184 true 00:07:57.184 16:55:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:57.184 16:55:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.442 16:55:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.699 16:55:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:07:57.699 16:55:13 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:57.957 true 00:07:57.957 16:55:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:57.957 16:55:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.890 16:55:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.147 16:55:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:07:59.148 16:55:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:59.148 true 00:07:59.148 16:55:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:59.148 16:55:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.405 16:55:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.661 16:55:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:07:59.661 16:55:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:59.917 true 00:07:59.917 16:55:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:07:59.917 16:55:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.848 16:55:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.105 16:55:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 
00:08:01.105 16:55:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:01.362 true 00:08:01.362 16:55:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:01.362 16:55:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.620 16:55:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.878 16:55:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:01.878 16:55:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:02.135 true 00:08:02.135 16:55:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:02.135 16:55:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.393 16:55:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.650 16:55:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:02.650 16:55:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:02.908 true 00:08:02.908 16:55:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:02.908 16:55:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.841 16:55:19 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.356 16:55:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:04.356 16:55:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:04.356 true 00:08:04.356 16:55:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:04.356 16:55:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.289 16:55:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.547 16:55:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:05.547 16:55:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:05.804 true 00:08:05.804 16:55:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:05.804 16:55:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.062 16:55:21 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.320 16:55:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:08:06.320 16:55:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:06.320 true 00:08:06.577 16:55:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:06.577 16:55:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.559 16:55:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.559 16:55:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:08:07.559 16:55:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:07.817 true 00:08:07.817 16:55:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:07.817 16:55:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.075 16:55:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.332 16:55:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:08:08.332 16:55:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:08.589 true 00:08:08.589 16:55:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:08.589 16:55:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:09.521 Initializing NVMe Controllers 00:08:09.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:09.521 Controller IO queue size 128, less than required. 00:08:09.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:09.521 Controller IO queue size 128, less than required. 00:08:09.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:09.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:09.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:09.521 Initialization complete. Launching workers. 00:08:09.521 ======================================================== 00:08:09.521 Latency(us) 00:08:09.521 Device Information : IOPS MiB/s Average min max 00:08:09.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 952.74 0.47 75440.74 3204.40 1060683.72 00:08:09.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10772.11 5.26 11846.79 3634.98 367030.75 00:08:09.521 ======================================================== 00:08:09.521 Total : 11724.86 5.73 17014.35 3204.40 1060683.72 00:08:09.521 00:08:09.521 16:55:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.778 16:55:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:08:09.778 16:55:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:10.036 true 00:08:10.036 16:55:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1615106 00:08:10.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1615106) - No such process 00:08:10.036 16:55:25 -- 
target/ns_hotplug_stress.sh@44 -- # wait 1615106 00:08:10.036 16:55:25 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:08:10.036 16:55:25 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:08:10.036 16:55:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:10.036 16:55:25 -- nvmf/common.sh@117 -- # sync 00:08:10.036 16:55:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.036 16:55:25 -- nvmf/common.sh@120 -- # set +e 00:08:10.036 16:55:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.036 16:55:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.036 rmmod nvme_tcp 00:08:10.036 rmmod nvme_fabrics 00:08:10.036 rmmod nvme_keyring 00:08:10.036 16:55:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.036 16:55:25 -- nvmf/common.sh@124 -- # set -e 00:08:10.036 16:55:25 -- nvmf/common.sh@125 -- # return 0 00:08:10.036 16:55:25 -- nvmf/common.sh@478 -- # '[' -n 1614808 ']' 00:08:10.036 16:55:25 -- nvmf/common.sh@479 -- # killprocess 1614808 00:08:10.036 16:55:25 -- common/autotest_common.sh@936 -- # '[' -z 1614808 ']' 00:08:10.036 16:55:25 -- common/autotest_common.sh@940 -- # kill -0 1614808 00:08:10.036 16:55:25 -- common/autotest_common.sh@941 -- # uname 00:08:10.036 16:55:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:10.036 16:55:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1614808 00:08:10.036 16:55:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:10.036 16:55:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:10.036 16:55:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1614808' 00:08:10.036 killing process with pid 1614808 00:08:10.036 16:55:25 -- common/autotest_common.sh@955 -- # kill 1614808 00:08:10.036 16:55:25 -- common/autotest_common.sh@960 -- # wait 1614808 00:08:10.294 16:55:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:10.294 16:55:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p 
]] 00:08:10.294 16:55:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:10.294 16:55:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.294 16:55:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.294 16:55:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.294 16:55:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.294 16:55:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.829 16:55:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:12.829 00:08:12.829 real 0m39.093s 00:08:12.829 user 2m32.488s 00:08:12.829 sys 0m10.516s 00:08:12.829 16:55:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:12.829 16:55:27 -- common/autotest_common.sh@10 -- # set +x 00:08:12.829 ************************************ 00:08:12.829 END TEST nvmf_ns_hotplug_stress 00:08:12.829 ************************************ 00:08:12.829 16:55:27 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:12.829 16:55:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:12.829 16:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.829 16:55:27 -- common/autotest_common.sh@10 -- # set +x 00:08:12.829 ************************************ 00:08:12.829 START TEST nvmf_connect_stress 00:08:12.829 ************************************ 00:08:12.829 16:55:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:12.829 * Looking for test storage... 
00:08:12.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.829 16:55:28 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.829 16:55:28 -- nvmf/common.sh@7 -- # uname -s 00:08:12.829 16:55:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.829 16:55:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.829 16:55:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.829 16:55:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.829 16:55:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.829 16:55:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.829 16:55:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.829 16:55:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.829 16:55:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.829 16:55:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.829 16:55:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.829 16:55:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.829 16:55:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.829 16:55:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.829 16:55:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.829 16:55:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.829 16:55:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.829 16:55:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.829 16:55:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.829 16:55:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.829 16:55:28 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.830 16:55:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.830 16:55:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.830 16:55:28 -- paths/export.sh@5 -- # export PATH 00:08:12.830 16:55:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.830 16:55:28 -- nvmf/common.sh@47 -- # : 0 00:08:12.830 16:55:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.830 16:55:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.830 16:55:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.830 16:55:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.830 16:55:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.830 16:55:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.830 16:55:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.830 16:55:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.830 16:55:28 -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:12.830 16:55:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:12.830 16:55:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.830 16:55:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:12.830 16:55:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:12.830 16:55:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:12.830 16:55:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.830 16:55:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.830 16:55:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.830 16:55:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:12.830 16:55:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:12.830 16:55:28 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:08:12.830 16:55:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.731 16:55:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:14.731 16:55:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.731 16:55:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.731 16:55:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.731 16:55:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.731 16:55:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.731 16:55:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.731 16:55:30 -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.731 16:55:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.731 16:55:30 -- nvmf/common.sh@296 -- # e810=() 00:08:14.731 16:55:30 -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.731 16:55:30 -- nvmf/common.sh@297 -- # x722=() 00:08:14.731 16:55:30 -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.731 16:55:30 -- nvmf/common.sh@298 -- # mlx=() 00:08:14.731 16:55:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.731 16:55:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.731 16:55:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.731 16:55:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.731 16:55:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.731 16:55:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.731 16:55:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.731 16:55:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.731 16:55:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.731 16:55:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:14.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:14.732 16:55:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.732 16:55:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:14.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:14.732 16:55:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.732 16:55:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:08:14.732 16:55:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.732 16:55:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:14.732 16:55:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.732 16:55:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:14.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:14.732 16:55:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.732 16:55:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.732 16:55:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.732 16:55:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:14.732 16:55:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.732 16:55:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:14.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:14.732 16:55:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.732 16:55:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:14.732 16:55:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:14.732 16:55:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:14.732 16:55:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.732 16:55:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.732 16:55:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.732 16:55:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.732 16:55:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.732 16:55:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.732 16:55:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.732 16:55:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:08:14.732 16:55:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.732 16:55:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.732 16:55:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.732 16:55:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.732 16:55:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.732 16:55:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.732 16:55:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.732 16:55:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.732 16:55:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.732 16:55:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.732 16:55:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.732 16:55:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:08:14.732 00:08:14.732 --- 10.0.0.2 ping statistics --- 00:08:14.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.732 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:14.732 16:55:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:08:14.732 00:08:14.732 --- 10.0.0.1 ping statistics --- 00:08:14.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.732 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:14.732 16:55:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.732 16:55:30 -- nvmf/common.sh@411 -- # return 0 00:08:14.732 16:55:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:14.732 16:55:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.732 16:55:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:14.732 16:55:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.732 16:55:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:14.732 16:55:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:14.732 16:55:30 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:14.732 16:55:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:14.732 16:55:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:14.732 16:55:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.732 16:55:30 -- nvmf/common.sh@470 -- # nvmfpid=1620971 00:08:14.732 16:55:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:14.732 16:55:30 -- nvmf/common.sh@471 -- # waitforlisten 1620971 00:08:14.732 16:55:30 -- common/autotest_common.sh@817 -- # '[' -z 1620971 ']' 00:08:14.732 16:55:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.732 16:55:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:14.732 16:55:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:14.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.732 16:55:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:14.732 16:55:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.732 [2024-04-18 16:55:30.320475] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:08:14.732 [2024-04-18 16:55:30.320570] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.732 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.732 [2024-04-18 16:55:30.386311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.990 [2024-04-18 16:55:30.499554] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.990 [2024-04-18 16:55:30.499619] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.990 [2024-04-18 16:55:30.499647] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.990 [2024-04-18 16:55:30.499659] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.991 [2024-04-18 16:55:30.499670] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:14.991 [2024-04-18 16:55:30.499805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.991 [2024-04-18 16:55:30.499835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.991 [2024-04-18 16:55:30.499838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.991 16:55:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:14.991 16:55:30 -- common/autotest_common.sh@850 -- # return 0 00:08:14.991 16:55:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:14.991 16:55:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:14.991 16:55:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.991 16:55:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.991 16:55:30 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.991 16:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.991 16:55:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.991 [2024-04-18 16:55:30.646551] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.991 16:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.991 16:55:30 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:14.991 16:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.991 16:55:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.991 16:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.991 16:55:30 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.991 16:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.991 16:55:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.991 [2024-04-18 16:55:30.674556] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:14.991 16:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.991 16:55:30 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:14.991 16:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.991 16:55:30 -- common/autotest_common.sh@10 -- # set +x 00:08:14.991 NULL1 00:08:14.991 16:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.991 16:55:30 -- target/connect_stress.sh@21 -- # PERF_PID=1620992 00:08:14.991 16:55:30 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:14.991 16:55:30 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:14.991 16:55:30 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:14.991 16:55:30 -- target/connect_stress.sh@27 -- # seq 1 20 00:08:14.991 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:14.991 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:14.991 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:14.991 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 16:55:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:15.248 16:55:30 -- target/connect_stress.sh@28 -- # cat 00:08:15.248 
16:55:30 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:15.248 16:55:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:15.248 16:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:15.248 16:55:30 -- common/autotest_common.sh@10 -- # set +x 00:08:15.506 16:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:15.506 16:55:31 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:15.506 16:55:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:15.506 16:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:15.506 16:55:31 -- common/autotest_common.sh@10 -- # set +x 00:08:15.764 16:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:15.764 16:55:31 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:15.764 16:55:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:15.764 16:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:15.764 16:55:31 -- common/autotest_common.sh@10 -- # set +x 00:08:16.022 16:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.022 16:55:31 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:16.022 16:55:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.022 16:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.022 16:55:31 -- common/autotest_common.sh@10 -- # set +x 00:08:16.587 16:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.587 16:55:32 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:16.587 16:55:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.587 16:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.587 16:55:32 -- common/autotest_common.sh@10 -- # set +x 00:08:16.844 16:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.844 16:55:32 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:16.844 16:55:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.844 16:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.844 16:55:32 -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.102 16:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.102 16:55:32 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:17.102 16:55:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.102 16:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.102 16:55:32 -- common/autotest_common.sh@10 -- # set +x 00:08:17.359 16:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.359 16:55:32 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:17.359 16:55:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.359 16:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.359 16:55:32 -- common/autotest_common.sh@10 -- # set +x 00:08:17.616 16:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.616 16:55:33 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:17.616 16:55:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.616 16:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.616 16:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:18.180 16:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.180 16:55:33 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:18.180 16:55:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.180 16:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.180 16:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:18.437 16:55:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.437 16:55:33 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:18.437 16:55:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.437 16:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.437 16:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:18.694 16:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.694 16:55:34 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:18.694 16:55:34 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.694 16:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.694 16:55:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.952 16:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.952 16:55:34 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:18.952 16:55:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.952 16:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.952 16:55:34 -- common/autotest_common.sh@10 -- # set +x 00:08:19.209 16:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.209 16:55:34 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:19.209 16:55:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:19.209 16:55:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.209 16:55:34 -- common/autotest_common.sh@10 -- # set +x 00:08:19.774 16:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.774 16:55:35 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:19.774 16:55:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:19.774 16:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.774 16:55:35 -- common/autotest_common.sh@10 -- # set +x 00:08:20.032 16:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.032 16:55:35 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:20.032 16:55:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.032 16:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.032 16:55:35 -- common/autotest_common.sh@10 -- # set +x 00:08:20.289 16:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.289 16:55:35 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:20.289 16:55:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.289 16:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.289 16:55:35 -- common/autotest_common.sh@10 -- # set +x 00:08:20.547 16:55:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.547 16:55:36 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:20.547 16:55:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.547 16:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.547 16:55:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.112 16:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.112 16:55:36 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:21.112 16:55:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.112 16:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.112 16:55:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.370 16:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.370 16:55:36 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:21.370 16:55:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.370 16:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.370 16:55:36 -- common/autotest_common.sh@10 -- # set +x 00:08:21.627 16:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.627 16:55:37 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:21.627 16:55:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.627 16:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.627 16:55:37 -- common/autotest_common.sh@10 -- # set +x 00:08:21.884 16:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.884 16:55:37 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:21.884 16:55:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.884 16:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.884 16:55:37 -- common/autotest_common.sh@10 -- # set +x 00:08:22.143 16:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.143 16:55:37 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:22.143 16:55:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.143 16:55:37 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.143 16:55:37 -- common/autotest_common.sh@10 -- # set +x 00:08:22.708 16:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.708 16:55:38 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:22.708 16:55:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.708 16:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.708 16:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:22.966 16:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.966 16:55:38 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:22.966 16:55:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.966 16:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.966 16:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 16:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.224 16:55:38 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:23.224 16:55:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.224 16:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.224 16:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:23.482 16:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.482 16:55:39 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:23.482 16:55:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.482 16:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.482 16:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:23.739 16:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.739 16:55:39 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:23.739 16:55:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.739 16:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.739 16:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:24.332 16:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.332 16:55:39 -- 
target/connect_stress.sh@34 -- # kill -0 1620992 00:08:24.332 16:55:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.332 16:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.332 16:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:24.591 16:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.591 16:55:40 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:24.591 16:55:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.591 16:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.591 16:55:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.851 16:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.851 16:55:40 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:24.851 16:55:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.851 16:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.851 16:55:40 -- common/autotest_common.sh@10 -- # set +x 00:08:25.111 16:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.111 16:55:40 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:25.111 16:55:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.111 16:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.111 16:55:40 -- common/autotest_common.sh@10 -- # set +x 00:08:25.111 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.370 16:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.370 16:55:41 -- target/connect_stress.sh@34 -- # kill -0 1620992 00:08:25.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1620992) - No such process 00:08:25.370 16:55:41 -- target/connect_stress.sh@38 -- # wait 1620992 00:08:25.370 16:55:41 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:25.370 16:55:41 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 
00:08:25.370 16:55:41 -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:25.370 16:55:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:25.370 16:55:41 -- nvmf/common.sh@117 -- # sync 00:08:25.370 16:55:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.370 16:55:41 -- nvmf/common.sh@120 -- # set +e 00:08:25.370 16:55:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.370 16:55:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.370 rmmod nvme_tcp 00:08:25.370 rmmod nvme_fabrics 00:08:25.370 rmmod nvme_keyring 00:08:25.370 16:55:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.370 16:55:41 -- nvmf/common.sh@124 -- # set -e 00:08:25.370 16:55:41 -- nvmf/common.sh@125 -- # return 0 00:08:25.370 16:55:41 -- nvmf/common.sh@478 -- # '[' -n 1620971 ']' 00:08:25.370 16:55:41 -- nvmf/common.sh@479 -- # killprocess 1620971 00:08:25.370 16:55:41 -- common/autotest_common.sh@936 -- # '[' -z 1620971 ']' 00:08:25.370 16:55:41 -- common/autotest_common.sh@940 -- # kill -0 1620971 00:08:25.370 16:55:41 -- common/autotest_common.sh@941 -- # uname 00:08:25.630 16:55:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:25.630 16:55:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1620971 00:08:25.630 16:55:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:25.630 16:55:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:25.630 16:55:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1620971' 00:08:25.630 killing process with pid 1620971 00:08:25.630 16:55:41 -- common/autotest_common.sh@955 -- # kill 1620971 00:08:25.630 16:55:41 -- common/autotest_common.sh@960 -- # wait 1620971 00:08:25.890 16:55:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:25.890 16:55:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:25.890 16:55:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:25.890 16:55:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.890 16:55:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.890 16:55:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.890 16:55:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.890 16:55:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.794 16:55:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:27.794 00:08:27.794 real 0m15.340s 00:08:27.794 user 0m38.256s 00:08:27.794 sys 0m5.921s 00:08:27.794 16:55:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.794 16:55:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.794 ************************************ 00:08:27.794 END TEST nvmf_connect_stress 00:08:27.794 ************************************ 00:08:27.794 16:55:43 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:27.794 16:55:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:27.794 16:55:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.794 16:55:43 -- common/autotest_common.sh@10 -- # set +x 00:08:28.053 ************************************ 00:08:28.053 START TEST nvmf_fused_ordering 00:08:28.053 ************************************ 00:08:28.053 16:55:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:28.053 * Looking for test storage... 
00:08:28.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.053 16:55:43 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.053 16:55:43 -- nvmf/common.sh@7 -- # uname -s 00:08:28.053 16:55:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.053 16:55:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.053 16:55:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.053 16:55:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.053 16:55:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.053 16:55:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.053 16:55:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.053 16:55:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.053 16:55:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.053 16:55:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.053 16:55:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.053 16:55:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.053 16:55:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.053 16:55:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.053 16:55:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.053 16:55:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.053 16:55:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.053 16:55:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.053 16:55:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.053 16:55:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.053 16:55:43 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.053 16:55:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.053 16:55:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.053 16:55:43 -- paths/export.sh@5 -- # export PATH 00:08:28.053 16:55:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.053 16:55:43 -- nvmf/common.sh@47 -- # : 0 00:08:28.053 16:55:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.053 16:55:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.053 16:55:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.053 16:55:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.053 16:55:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.053 16:55:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.053 16:55:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.053 16:55:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.053 16:55:43 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:28.053 16:55:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:28.053 16:55:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.053 16:55:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:28.053 16:55:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:28.053 16:55:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:28.053 16:55:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.053 16:55:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.053 16:55:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.053 16:55:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:28.053 16:55:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:28.053 16:55:43 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:08:28.053 16:55:43 -- common/autotest_common.sh@10 -- # set +x 00:08:29.957 16:55:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:29.957 16:55:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.957 16:55:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.957 16:55:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.957 16:55:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.957 16:55:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.957 16:55:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.957 16:55:45 -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.957 16:55:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.957 16:55:45 -- nvmf/common.sh@296 -- # e810=() 00:08:29.957 16:55:45 -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.957 16:55:45 -- nvmf/common.sh@297 -- # x722=() 00:08:29.957 16:55:45 -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.957 16:55:45 -- nvmf/common.sh@298 -- # mlx=() 00:08:29.957 16:55:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.957 16:55:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.957 16:55:45 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.958 16:55:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.958 16:55:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.958 16:55:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.958 16:55:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.958 16:55:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.958 16:55:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:29.958 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:29.958 16:55:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.958 16:55:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:29.958 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:29.958 16:55:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.958 16:55:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.958 16:55:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:08:29.958 16:55:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.958 16:55:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:29.958 16:55:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.958 16:55:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:29.958 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:29.958 16:55:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.958 16:55:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.958 16:55:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.958 16:55:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:30.216 16:55:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.216 16:55:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.216 16:55:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.216 16:55:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:30.216 16:55:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:30.216 16:55:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:30.216 16:55:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:30.216 16:55:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:30.216 16:55:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.216 16:55:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.216 16:55:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.216 16:55:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.216 16:55:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.216 16:55:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.216 16:55:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.216 16:55:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:08:30.216 16:55:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.216 16:55:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.216 16:55:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.216 16:55:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.216 16:55:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.216 16:55:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.216 16:55:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.216 16:55:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.216 16:55:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.216 16:55:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.216 16:55:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.216 16:55:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:08:30.216 00:08:30.216 --- 10.0.0.2 ping statistics --- 00:08:30.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.216 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:08:30.216 16:55:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:08:30.216 00:08:30.216 --- 10.0.0.1 ping statistics --- 00:08:30.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.216 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:08:30.216 16:55:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.216 16:55:45 -- nvmf/common.sh@411 -- # return 0 00:08:30.216 16:55:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:30.216 16:55:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.216 16:55:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:30.216 16:55:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:30.216 16:55:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.216 16:55:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:30.216 16:55:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:30.216 16:55:45 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:30.216 16:55:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:30.216 16:55:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:30.216 16:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:30.216 16:55:45 -- nvmf/common.sh@470 -- # nvmfpid=1624271 00:08:30.216 16:55:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:30.216 16:55:45 -- nvmf/common.sh@471 -- # waitforlisten 1624271 00:08:30.216 16:55:45 -- common/autotest_common.sh@817 -- # '[' -z 1624271 ']' 00:08:30.216 16:55:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.216 16:55:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:30.216 16:55:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:30.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.216 16:55:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:30.216 16:55:45 -- common/autotest_common.sh@10 -- # set +x 00:08:30.216 [2024-04-18 16:55:45.870725] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:08:30.216 [2024-04-18 16:55:45.870816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.216 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.475 [2024-04-18 16:55:45.941121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.475 [2024-04-18 16:55:46.058331] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.475 [2024-04-18 16:55:46.058419] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.475 [2024-04-18 16:55:46.058445] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.475 [2024-04-18 16:55:46.058459] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.475 [2024-04-18 16:55:46.058470] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:30.475 [2024-04-18 16:55:46.058508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.411 16:55:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:31.411 16:55:46 -- common/autotest_common.sh@850 -- # return 0 00:08:31.411 16:55:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:31.411 16:55:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:31.411 16:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.411 16:55:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.411 16:55:46 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.411 16:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.411 16:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.411 [2024-04-18 16:55:46.826719] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.411 16:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.411 16:55:46 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:31.411 16:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.411 16:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.411 16:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.411 16:55:46 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.411 16:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.411 16:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.411 [2024-04-18 16:55:46.842926] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.411 16:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.411 16:55:46 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:31.411 16:55:46 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.411 16:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.411 NULL1 00:08:31.411 16:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.411 16:55:46 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:31.411 16:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.411 16:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.411 16:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.411 16:55:46 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:31.411 16:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.411 16:55:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.411 16:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.411 16:55:46 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:31.411 [2024-04-18 16:55:46.888059] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:08:31.411 [2024-04-18 16:55:46.888103] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624423 ] 00:08:31.411 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.670 Attached to nqn.2016-06.io.spdk:cnode1 00:08:31.670 Namespace ID: 1 size: 1GB 00:08:31.670 fused_ordering(0) 00:08:31.670 fused_ordering(1) 00:08:31.670 fused_ordering(2) 00:08:31.670 fused_ordering(3) 00:08:31.670 fused_ordering(4) 00:08:31.670 fused_ordering(5) 00:08:31.670 fused_ordering(6) 00:08:31.670 fused_ordering(7) 00:08:31.670 fused_ordering(8) 00:08:31.670 fused_ordering(9) 00:08:31.670 fused_ordering(10) 00:08:31.670 fused_ordering(11) 00:08:31.670 fused_ordering(12) 00:08:31.670 fused_ordering(13) 00:08:31.670 fused_ordering(14) 00:08:31.670 fused_ordering(15) 00:08:31.670 fused_ordering(16) 00:08:31.670 fused_ordering(17) 00:08:31.670 fused_ordering(18) 00:08:31.670 fused_ordering(19) 00:08:31.670 fused_ordering(20) 00:08:31.670 fused_ordering(21) 00:08:31.670 fused_ordering(22) 00:08:31.670 fused_ordering(23) 00:08:31.670 fused_ordering(24) 00:08:31.670 fused_ordering(25) 00:08:31.670 fused_ordering(26) 00:08:31.670 fused_ordering(27) 00:08:31.670 fused_ordering(28) 00:08:31.670 fused_ordering(29) 00:08:31.670 fused_ordering(30) 00:08:31.670 fused_ordering(31) 00:08:31.670 fused_ordering(32) 00:08:31.670 fused_ordering(33) 00:08:31.670 fused_ordering(34) 00:08:31.670 fused_ordering(35) 00:08:31.670 fused_ordering(36) 00:08:31.670 fused_ordering(37) 00:08:31.670 fused_ordering(38) 00:08:31.670 fused_ordering(39) 00:08:31.670 fused_ordering(40) 00:08:31.670 fused_ordering(41) 00:08:31.670 fused_ordering(42) 00:08:31.670 fused_ordering(43) 00:08:31.670 fused_ordering(44) 00:08:31.670 fused_ordering(45) 00:08:31.670 fused_ordering(46) 00:08:31.670 fused_ordering(47) 00:08:31.670 fused_ordering(48) 
00:08:31.670 fused_ordering(49) 00:08:31.670 fused_ordering(50) 00:08:31.670 fused_ordering(51) 00:08:31.670 fused_ordering(52) 00:08:31.670 fused_ordering(53) 00:08:31.670 fused_ordering(54) 00:08:31.670 fused_ordering(55) 00:08:31.670 fused_ordering(56) 00:08:31.670 fused_ordering(57) 00:08:31.670 fused_ordering(58) 00:08:31.670 fused_ordering(59) 00:08:31.670 fused_ordering(60) 00:08:31.670 fused_ordering(61) 00:08:31.670 fused_ordering(62) 00:08:31.670 fused_ordering(63) 00:08:31.670 fused_ordering(64) 00:08:31.670 fused_ordering(65) 00:08:31.670 fused_ordering(66) 00:08:31.670 fused_ordering(67) 00:08:31.670 fused_ordering(68) 00:08:31.670 fused_ordering(69) 00:08:31.670 fused_ordering(70) 00:08:31.670 fused_ordering(71) 00:08:31.670 fused_ordering(72) 00:08:31.670 fused_ordering(73) 00:08:31.670 fused_ordering(74) 00:08:31.670 fused_ordering(75) 00:08:31.670 fused_ordering(76) 00:08:31.670 fused_ordering(77) 00:08:31.670 fused_ordering(78) 00:08:31.670 fused_ordering(79) 00:08:31.670 fused_ordering(80) 00:08:31.670 fused_ordering(81) 00:08:31.670 fused_ordering(82) 00:08:31.670 fused_ordering(83) 00:08:31.670 fused_ordering(84) 00:08:31.670 fused_ordering(85) 00:08:31.670 fused_ordering(86) 00:08:31.670 fused_ordering(87) 00:08:31.670 fused_ordering(88) 00:08:31.670 fused_ordering(89) 00:08:31.670 fused_ordering(90) 00:08:31.670 fused_ordering(91) 00:08:31.670 fused_ordering(92) 00:08:31.670 fused_ordering(93) 00:08:31.670 fused_ordering(94) 00:08:31.670 fused_ordering(95) 00:08:31.670 fused_ordering(96) 00:08:31.670 fused_ordering(97) 00:08:31.670 fused_ordering(98) 00:08:31.670 fused_ordering(99) 00:08:31.670 fused_ordering(100) 00:08:31.670 fused_ordering(101) 00:08:31.670 fused_ordering(102) 00:08:31.670 fused_ordering(103) 00:08:31.670 fused_ordering(104) 00:08:31.670 fused_ordering(105) 00:08:31.670 fused_ordering(106) 00:08:31.670 fused_ordering(107) 00:08:31.670 fused_ordering(108) 00:08:31.670 fused_ordering(109) 00:08:31.670 fused_ordering(110) 
00:08:31.670 fused_ordering(111) 00:08:31.670 fused_ordering(112) 00:08:31.670 fused_ordering(113) 00:08:31.670 fused_ordering(114) 00:08:31.670 fused_ordering(115) 00:08:31.670 fused_ordering(116) 00:08:31.670 fused_ordering(117) 00:08:31.670 fused_ordering(118) 00:08:31.670 fused_ordering(119) 00:08:31.670 fused_ordering(120) 00:08:31.670 fused_ordering(121) 00:08:31.670 fused_ordering(122) 00:08:31.670 fused_ordering(123) 00:08:31.670 fused_ordering(124) 00:08:31.670 fused_ordering(125) 00:08:31.670 fused_ordering(126) 00:08:31.670 fused_ordering(127) 00:08:31.670 fused_ordering(128) 00:08:31.670 fused_ordering(129) 00:08:31.670 fused_ordering(130) 00:08:31.670 fused_ordering(131) 00:08:31.670 fused_ordering(132) 00:08:31.670 fused_ordering(133) 00:08:31.670 fused_ordering(134) 00:08:31.670 fused_ordering(135) 00:08:31.670 fused_ordering(136) 00:08:31.670 fused_ordering(137) 00:08:31.670 fused_ordering(138) 00:08:31.670 fused_ordering(139) 00:08:31.670 fused_ordering(140) 00:08:31.670 fused_ordering(141) 00:08:31.670 fused_ordering(142) 00:08:31.670 fused_ordering(143) 00:08:31.670 fused_ordering(144) 00:08:31.671 fused_ordering(145) 00:08:31.671 fused_ordering(146) 00:08:31.671 fused_ordering(147) 00:08:31.671 fused_ordering(148) 00:08:31.671 fused_ordering(149) 00:08:31.671 fused_ordering(150) 00:08:31.671 fused_ordering(151) 00:08:31.671 fused_ordering(152) 00:08:31.671 fused_ordering(153) 00:08:31.671 fused_ordering(154) 00:08:31.671 fused_ordering(155) 00:08:31.671 fused_ordering(156) 00:08:31.671 fused_ordering(157) 00:08:31.671 fused_ordering(158) 00:08:31.671 fused_ordering(159) 00:08:31.671 fused_ordering(160) 00:08:31.671 fused_ordering(161) 00:08:31.671 fused_ordering(162) 00:08:31.671 fused_ordering(163) 00:08:31.671 fused_ordering(164) 00:08:31.671 fused_ordering(165) 00:08:31.671 fused_ordering(166) 00:08:31.671 fused_ordering(167) 00:08:31.671 fused_ordering(168) 00:08:31.671 fused_ordering(169) 00:08:31.671 fused_ordering(170) 00:08:31.671 
fused_ordering(171) 00:08:31.671 fused_ordering(172) 00:08:31.671 fused_ordering(173) 00:08:31.671 fused_ordering(174) 00:08:31.671 fused_ordering(175) 00:08:31.671 fused_ordering(176) 00:08:31.671 fused_ordering(177) 00:08:31.671 fused_ordering(178) 00:08:31.671 fused_ordering(179) 00:08:31.671 fused_ordering(180) 00:08:31.671 fused_ordering(181) 00:08:31.671 fused_ordering(182) 00:08:31.671 fused_ordering(183) 00:08:31.671 fused_ordering(184) 00:08:31.671 fused_ordering(185) 00:08:31.671 fused_ordering(186) 00:08:31.671 fused_ordering(187) 00:08:31.671 fused_ordering(188) 00:08:31.671 fused_ordering(189) 00:08:31.671 fused_ordering(190) 00:08:31.671 fused_ordering(191) 00:08:31.671 fused_ordering(192) 00:08:31.671 fused_ordering(193) 00:08:31.671 fused_ordering(194) 00:08:31.671 fused_ordering(195) 00:08:31.671 fused_ordering(196) 00:08:31.671 fused_ordering(197) 00:08:31.671 fused_ordering(198) 00:08:31.671 fused_ordering(199) 00:08:31.671 fused_ordering(200) 00:08:31.671 fused_ordering(201) 00:08:31.671 fused_ordering(202) 00:08:31.671 fused_ordering(203) 00:08:31.671 fused_ordering(204) 00:08:31.671 fused_ordering(205) 00:08:32.240 fused_ordering(206) 00:08:32.240 fused_ordering(207) 00:08:32.240 fused_ordering(208) 00:08:32.240 fused_ordering(209) 00:08:32.240 fused_ordering(210) 00:08:32.240 fused_ordering(211) 00:08:32.240 fused_ordering(212) 00:08:32.240 fused_ordering(213) 00:08:32.240 fused_ordering(214) 00:08:32.240 fused_ordering(215) 00:08:32.240 fused_ordering(216) 00:08:32.240 fused_ordering(217) 00:08:32.240 fused_ordering(218) 00:08:32.240 fused_ordering(219) 00:08:32.240 fused_ordering(220) 00:08:32.240 fused_ordering(221) 00:08:32.240 fused_ordering(222) 00:08:32.240 fused_ordering(223) 00:08:32.240 fused_ordering(224) 00:08:32.240 fused_ordering(225) 00:08:32.240 fused_ordering(226) 00:08:32.240 fused_ordering(227) 00:08:32.240 fused_ordering(228) 00:08:32.240 fused_ordering(229) 00:08:32.240 fused_ordering(230) 00:08:32.240 fused_ordering(231) 
00:08:32.240 fused_ordering(232) 00:08:32.240 fused_ordering(233) 00:08:32.240 fused_ordering(234) 00:08:32.240 fused_ordering(235) 00:08:32.240 fused_ordering(236) 00:08:32.240 fused_ordering(237) 00:08:32.240 fused_ordering(238) 00:08:32.240 fused_ordering(239) 00:08:32.240 fused_ordering(240) 00:08:32.240 fused_ordering(241) 00:08:32.240 fused_ordering(242) 00:08:32.240 fused_ordering(243) 00:08:32.240 fused_ordering(244) 00:08:32.240 fused_ordering(245) 00:08:32.240 fused_ordering(246) 00:08:32.240 fused_ordering(247) 00:08:32.240 fused_ordering(248) 00:08:32.240 fused_ordering(249) 00:08:32.240 fused_ordering(250) 00:08:32.240 fused_ordering(251) 00:08:32.240 fused_ordering(252) 00:08:32.240 fused_ordering(253) 00:08:32.240 fused_ordering(254) 00:08:32.240 fused_ordering(255) 00:08:32.240 fused_ordering(256) 00:08:32.240 fused_ordering(257) 00:08:32.240 fused_ordering(258) 00:08:32.240 fused_ordering(259) 00:08:32.240 fused_ordering(260) 00:08:32.240 fused_ordering(261) 00:08:32.240 fused_ordering(262) 00:08:32.240 fused_ordering(263) 00:08:32.240 fused_ordering(264) 00:08:32.240 fused_ordering(265) 00:08:32.240 fused_ordering(266) 00:08:32.240 fused_ordering(267) 00:08:32.240 fused_ordering(268) 00:08:32.240 fused_ordering(269) 00:08:32.240 fused_ordering(270) 00:08:32.240 fused_ordering(271) 00:08:32.240 fused_ordering(272) 00:08:32.240 fused_ordering(273) 00:08:32.240 fused_ordering(274) 00:08:32.240 fused_ordering(275) 00:08:32.240 fused_ordering(276) 00:08:32.240 fused_ordering(277) 00:08:32.240 fused_ordering(278) 00:08:32.240 fused_ordering(279) 00:08:32.240 fused_ordering(280) 00:08:32.240 fused_ordering(281) 00:08:32.240 fused_ordering(282) 00:08:32.240 fused_ordering(283) 00:08:32.240 fused_ordering(284) 00:08:32.240 fused_ordering(285) 00:08:32.240 fused_ordering(286) 00:08:32.240 fused_ordering(287) 00:08:32.240 fused_ordering(288) 00:08:32.240 fused_ordering(289) 00:08:32.240 fused_ordering(290) 00:08:32.240 fused_ordering(291) 00:08:32.240 
00:08:32.240 fused_ordering(292) ... 00:08:33.945 fused_ordering(1017) [726 consecutive fused_ordering(N) progress entries, N=292-1017, elided; timestamps step through 00:08:32.240, 00:08:32.809, 00:08:33.377 and 00:08:33.945]
00:08:33.945 fused_ordering(1018) 00:08:33.945 fused_ordering(1019) 00:08:33.945 fused_ordering(1020) 00:08:33.945 fused_ordering(1021) 00:08:33.945 fused_ordering(1022) 00:08:33.945 fused_ordering(1023) 00:08:33.945 16:55:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:33.945 16:55:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:33.945 16:55:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:33.945 16:55:49 -- nvmf/common.sh@117 -- # sync 00:08:33.945 16:55:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.945 16:55:49 -- nvmf/common.sh@120 -- # set +e 00:08:33.945 16:55:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.945 16:55:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.945 rmmod nvme_tcp 00:08:33.945 rmmod nvme_fabrics 00:08:33.945 rmmod nvme_keyring 00:08:33.945 16:55:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.945 16:55:49 -- nvmf/common.sh@124 -- # set -e 00:08:33.945 16:55:49 -- nvmf/common.sh@125 -- # return 0 00:08:33.945 16:55:49 -- nvmf/common.sh@478 -- # '[' -n 1624271 ']' 00:08:33.945 16:55:49 -- nvmf/common.sh@479 -- # killprocess 1624271 00:08:33.945 16:55:49 -- common/autotest_common.sh@936 -- # '[' -z 1624271 ']' 00:08:33.945 16:55:49 -- common/autotest_common.sh@940 -- # kill -0 1624271 00:08:33.945 16:55:49 -- common/autotest_common.sh@941 -- # uname 00:08:33.945 16:55:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.945 16:55:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1624271 00:08:34.205 16:55:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:34.205 16:55:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:34.205 16:55:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1624271' 00:08:34.205 killing process with pid 1624271 00:08:34.205 16:55:49 -- common/autotest_common.sh@955 -- # kill 1624271 00:08:34.205 16:55:49 -- common/autotest_common.sh@960 -- 
# wait 1624271 00:08:34.463 16:55:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:34.463 16:55:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:34.463 16:55:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:34.463 16:55:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.463 16:55:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.463 16:55:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.463 16:55:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.463 16:55:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.374 16:55:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.374 00:08:36.374 real 0m8.449s 00:08:36.374 user 0m5.675s 00:08:36.374 sys 0m3.688s 00:08:36.374 16:55:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:36.374 16:55:51 -- common/autotest_common.sh@10 -- # set +x 00:08:36.374 ************************************ 00:08:36.374 END TEST nvmf_fused_ordering 00:08:36.374 ************************************ 00:08:36.374 16:55:52 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:36.374 16:55:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.374 16:55:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.374 16:55:52 -- common/autotest_common.sh@10 -- # set +x 00:08:36.633 ************************************ 00:08:36.633 START TEST nvmf_delete_subsystem 00:08:36.633 ************************************ 00:08:36.633 16:55:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:36.633 * Looking for test storage... 
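The teardown trace above (nvmf/common.sh@120-125) wraps `modprobe -v -r nvme-tcp` in `set +e` plus a `for i in {1..20}` retry loop, so a module that is still busy cannot abort the run. A minimal sketch of that retry pattern, under the assumption that a generic helper is acceptable (`retry_cleanup` is an illustrative name, not an SPDK function):

```shell
#!/usr/bin/env bash
# Sketch of the set +e / retry-loop teardown seen in the trace above.
retry_cleanup() {
    local cmd="$1" attempts="${2:-20}" i
    set +e                      # a failed unload must not abort the script
    for ((i = 1; i <= attempts; i++)); do
        if $cmd; then
            set -e              # restore errexit once cleanup succeeds
            return 0
        fi
        sleep 0.1               # target may still be in use; back off briefly
    done
    set -e
    return 1                    # every attempt failed
}
```

Following the trace's convention, `retry_cleanup 'modprobe -r nvme-tcp'` would keep retrying while the module still has users, then restore `set -e` exactly as nvmf/common.sh does.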
00:08:36.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.633 16:55:52 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.633 16:55:52 -- nvmf/common.sh@7 -- # uname -s 00:08:36.633 16:55:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.633 16:55:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.633 16:55:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.633 16:55:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.633 16:55:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.633 16:55:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.633 16:55:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.633 16:55:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.633 16:55:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.633 16:55:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.633 16:55:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.633 16:55:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.633 16:55:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.633 16:55:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.633 16:55:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.633 16:55:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.633 16:55:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.633 16:55:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.633 16:55:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.633 16:55:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.633 16:55:52 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.633 16:55:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.633 16:55:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.633 16:55:52 -- paths/export.sh@5 -- # export PATH 00:08:36.633 16:55:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.633 16:55:52 -- nvmf/common.sh@47 -- # : 0 00:08:36.633 16:55:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.633 16:55:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.633 16:55:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.633 16:55:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.633 16:55:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.633 16:55:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.633 16:55:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.633 16:55:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.633 16:55:52 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:36.633 16:55:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:36.633 16:55:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.633 16:55:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:36.633 16:55:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:36.633 16:55:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:36.633 16:55:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.633 16:55:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.633 16:55:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.633 16:55:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:36.633 16:55:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:36.633 16:55:52 
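The PATH echoed above shows the side effect of paths/export.sh prepending the golangci/protoc/go directories each time it is sourced: the same triplet appears six times over. A sketch of an order-preserving dedupe for such a PATH-style string (`dedupe_path` is an illustrative helper, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Split on ':', let awk print each directory only the first time it is
# seen, then stitch the survivors back into a colon-separated string.
dedupe_path() {
    printf '%s' "$1" | awk -v RS=':' '!seen[$0]++' | paste -sd ':' -
}
```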
-- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.633 16:55:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.538 16:55:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:38.538 16:55:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.538 16:55:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.538 16:55:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.538 16:55:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.538 16:55:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.538 16:55:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.538 16:55:54 -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.538 16:55:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.538 16:55:54 -- nvmf/common.sh@296 -- # e810=() 00:08:38.538 16:55:54 -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.538 16:55:54 -- nvmf/common.sh@297 -- # x722=() 00:08:38.538 16:55:54 -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.538 16:55:54 -- nvmf/common.sh@298 -- # mlx=() 00:08:38.538 16:55:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.538 16:55:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.538 16:55:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.538 16:55:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.538 16:55:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.538 16:55:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.538 16:55:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:38.538 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:38.538 16:55:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.538 16:55:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:38.538 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:38.538 16:55:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.538 16:55:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:08:38.538 16:55:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.538 16:55:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:38.538 16:55:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.538 16:55:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:38.538 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:38.538 16:55:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.538 16:55:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.538 16:55:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.538 16:55:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:38.538 16:55:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.538 16:55:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:38.538 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:38.538 16:55:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.538 16:55:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:38.538 16:55:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:38.538 16:55:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:38.538 16:55:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:38.538 16:55:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.538 16:55:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.538 16:55:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.538 16:55:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.538 16:55:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.538 16:55:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.538 16:55:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.538 16:55:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:08:38.538 16:55:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.538 16:55:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.538 16:55:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.538 16:55:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.538 16:55:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.797 16:55:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.797 16:55:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.797 16:55:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.797 16:55:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.797 16:55:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.797 16:55:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.797 16:55:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:08:38.797 00:08:38.797 --- 10.0.0.2 ping statistics --- 00:08:38.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.797 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:08:38.797 16:55:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:08:38.797 00:08:38.797 --- 10.0.0.1 ping statistics --- 00:08:38.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.797 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:08:38.797 16:55:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.797 16:55:54 -- nvmf/common.sh@411 -- # return 0 00:08:38.797 16:55:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:38.797 16:55:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.797 16:55:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:38.797 16:55:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:38.797 16:55:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.797 16:55:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:38.797 16:55:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:38.797 16:55:54 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:38.797 16:55:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:38.797 16:55:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:38.797 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:38.797 16:55:54 -- nvmf/common.sh@470 -- # nvmfpid=1626638 00:08:38.797 16:55:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:38.797 16:55:54 -- nvmf/common.sh@471 -- # waitforlisten 1626638 00:08:38.797 16:55:54 -- common/autotest_common.sh@817 -- # '[' -z 1626638 ']' 00:08:38.797 16:55:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.797 16:55:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:38.797 16:55:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:38.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.797 16:55:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:38.797 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:38.797 [2024-04-18 16:55:54.387526] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:08:38.797 [2024-04-18 16:55:54.387607] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.797 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.797 [2024-04-18 16:55:54.455683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:39.055 [2024-04-18 16:55:54.574808] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.055 [2024-04-18 16:55:54.574870] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.055 [2024-04-18 16:55:54.574886] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.055 [2024-04-18 16:55:54.574900] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.055 [2024-04-18 16:55:54.574912] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:39.056 [2024-04-18 16:55:54.575000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.056 [2024-04-18 16:55:54.575006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.056 16:55:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:39.056 16:55:54 -- common/autotest_common.sh@850 -- # return 0 00:08:39.056 16:55:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:39.056 16:55:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:39.056 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:39.056 16:55:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.056 16:55:54 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.056 16:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.056 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:39.056 [2024-04-18 16:55:54.720829] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.056 16:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.056 16:55:54 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:39.056 16:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.056 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:39.056 16:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.056 16:55:54 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.056 16:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.056 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:39.056 [2024-04-18 16:55:54.737077] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.056 16:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:39.056 16:55:54 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:39.056 16:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.056 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:39.056 NULL1 00:08:39.056 16:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.056 16:55:54 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:39.056 16:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.056 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:39.056 Delay0 00:08:39.056 16:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.056 16:55:54 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.056 16:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.056 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:08:39.315 16:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.315 16:55:54 -- target/delete_subsystem.sh@28 -- # perf_pid=1626742 00:08:39.315 16:55:54 -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:39.315 16:55:54 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:39.315 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.316 [2024-04-18 16:55:54.811778] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:41.221 16:55:56 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.221 16:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.221 16:55:56 -- common/autotest_common.sh@10 -- # set +x 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 starting I/O failed: -6 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 starting I/O failed: -6 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 starting I/O failed: -6 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 starting I/O failed: -6 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 starting I/O failed: -6 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.481 starting I/O failed: -6 00:08:41.481 Write completed with error (sct=0, sc=8) 00:08:41.481 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with 
error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 [2024-04-18 16:55:57.062563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ad30 is same with the state(5) to be set 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 
00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error 
(sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write 
completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 
00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error 
(sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Write completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 Read completed with error (sct=0, sc=8) 00:08:41.482 starting I/O failed: -6 00:08:41.483 Read completed with error (sct=0, sc=8) 00:08:41.483 starting I/O failed: -6 00:08:41.483 starting I/O failed: -6 00:08:41.483 starting I/O failed: -6 00:08:41.483 starting I/O failed: -6 00:08:42.419 [2024-04-18 16:55:58.031008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1859120 is same with the state(5) to be set 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error 
(sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 [2024-04-18 16:55:58.065349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183aba0 is same with the state(5) to be set 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Write completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 00:08:42.419 Read completed with error (sct=0, sc=8) 
00:08:42.419 [... roughly a hundred identical "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completion lines elided: every outstanding I/O was aborted with generic status sc=0x8 (Command Aborted due to SQ Deletion) while the subsystem was deleted out from under spdk_nvme_perf ...] 00:08:42.419 [2024-04-18 16:55:58.065624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08a400bf90 is same with the state(5) to be set 00:08:42.419 [2024-04-18 16:55:58.065913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08a400c690 is same with the state(5) to be set 00:08:42.419 [2024-04-18 16:55:58.066110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183a880 is same with the state(5) to be set 00:08:42.419 [2024-04-18 16:55:58.067001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1859120 (9): Bad file descriptor 00:08:42.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:42.419 16:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:42.419 16:55:58 -- target/delete_subsystem.sh@34 -- # delay=0 00:08:42.419 16:55:58 -- target/delete_subsystem.sh@35 -- # kill -0 1626742 00:08:42.419 16:55:58 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:42.420 Initializing NVMe Controllers 00:08:42.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:42.420 Controller IO queue size 128, less than required. 00:08:42.420 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:42.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:42.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:42.420 Initialization complete. Launching workers.
00:08:42.420 ======================================================== 00:08:42.420 Latency(us) 00:08:42.420 Device Information : IOPS MiB/s Average min max 00:08:42.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.34 0.08 906614.05 503.09 1043703.77 00:08:42.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.22 0.09 936959.44 537.25 2001615.98 00:08:42.420 ======================================================== 00:08:42.420 Total : 347.56 0.17 922523.71 503.09 2001615.98 00:08:42.420 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@35 -- # kill -0 1626742 00:08:43.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1626742) - No such process 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@45 -- # NOT wait 1626742 00:08:43.028 16:55:58 -- common/autotest_common.sh@638 -- # local es=0 00:08:43.028 16:55:58 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1626742 00:08:43.028 16:55:58 -- common/autotest_common.sh@626 -- # local arg=wait 00:08:43.028 16:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:43.028 16:55:58 -- common/autotest_common.sh@630 -- # type -t wait 00:08:43.028 16:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:43.028 16:55:58 -- common/autotest_common.sh@641 -- # wait 1626742 00:08:43.028 16:55:58 -- common/autotest_common.sh@641 -- # es=1 00:08:43.028 16:55:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:43.028 16:55:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:43.028 16:55:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:43.028 16:55:58 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:43.028 16:55:58 -- common/autotest_common.sh@10 -- # set +x 00:08:43.028 16:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.028 16:55:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.028 16:55:58 -- common/autotest_common.sh@10 -- # set +x 00:08:43.028 [2024-04-18 16:55:58.591044] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.028 16:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.028 16:55:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.028 16:55:58 -- common/autotest_common.sh@10 -- # set +x 00:08:43.028 16:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@54 -- # perf_pid=1627189 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@56 -- # delay=0 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@57 -- # kill -0 1627189 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.028 16:55:58 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:43.028 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.028 [2024-04-18 16:55:58.653757] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
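The `delay=0` / `kill -0 <pid>` / `sleep 0.5` xtrace lines above are delete_subsystem.sh polling the spdk_nvme_perf PID until it disappears ("No such process"). A minimal sketch of that bounded-wait pattern, assuming illustrative names and a much shorter poll interval than the script's 0.5s:

```shell
# Sketch of the bounded-wait pattern seen in delete_subsystem.sh.
# `kill -0 <pid>` sends no signal; it only tests that the process still
# exists. A capped counter turns the check into a poll with a timeout.
# The function name, 0.01s interval, and demo workload are illustrative.
wait_for_pid() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            return 1        # still alive after the retry budget: give up
        fi
        sleep 0.01
    done
    return 0                # process is gone, like the "No such process" line
}

sleep 0.05 &                # stand-in for the spdk_nvme_perf workload
wait_for_pid $! && echo "perf stand-in exited"
```

The `2>/dev/null` matters: once the PID is reaped, `kill -0` prints "No such process" to stderr, which is exactly the message the log shows leaking from the script's final check.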
00:08:43.599 16:55:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.599 16:55:59 -- target/delete_subsystem.sh@57 -- # kill -0 1627189 00:08:43.599 16:55:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:44.166 16:55:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.166 16:55:59 -- target/delete_subsystem.sh@57 -- # kill -0 1627189 00:08:44.166 16:55:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:44.426 16:56:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.426 16:56:00 -- target/delete_subsystem.sh@57 -- # kill -0 1627189 00:08:44.426 16:56:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:44.992 16:56:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.993 16:56:00 -- target/delete_subsystem.sh@57 -- # kill -0 1627189 00:08:44.993 16:56:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.559 16:56:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.559 16:56:01 -- target/delete_subsystem.sh@57 -- # kill -0 1627189 00:08:45.559 16:56:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.127 16:56:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.127 16:56:01 -- target/delete_subsystem.sh@57 -- # kill -0 1627189 00:08:46.127 16:56:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.386 Initializing NVMe Controllers 00:08:46.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:46.386 Controller IO queue size 128, less than required. 00:08:46.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:46.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:46.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:46.386 Initialization complete. Launching workers. 
00:08:46.386 ======================================================== 00:08:46.386 Latency(us) 00:08:46.386 Device Information : IOPS MiB/s Average min max 00:08:46.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004694.91 1000192.03 1014783.05 00:08:46.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004555.51 1000209.37 1043079.45 00:08:46.387 ======================================================== 00:08:46.387 Total : 256.00 0.12 1004625.21 1000192.03 1043079.45 00:08:46.387 00:08:46.646 16:56:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.646 16:56:02 -- target/delete_subsystem.sh@57 -- # kill -0 1627189 00:08:46.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1627189) - No such process 00:08:46.646 16:56:02 -- target/delete_subsystem.sh@67 -- # wait 1627189 00:08:46.646 16:56:02 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:46.646 16:56:02 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:46.646 16:56:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:46.646 16:56:02 -- nvmf/common.sh@117 -- # sync 00:08:46.646 16:56:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.646 16:56:02 -- nvmf/common.sh@120 -- # set +e 00:08:46.646 16:56:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.646 16:56:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.647 rmmod nvme_tcp 00:08:46.647 rmmod nvme_fabrics 00:08:46.647 rmmod nvme_keyring 00:08:46.647 16:56:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.647 16:56:02 -- nvmf/common.sh@124 -- # set -e 00:08:46.647 16:56:02 -- nvmf/common.sh@125 -- # return 0 00:08:46.647 16:56:02 -- nvmf/common.sh@478 -- # '[' -n 1626638 ']' 00:08:46.647 16:56:02 -- nvmf/common.sh@479 -- # killprocess 1626638 00:08:46.647 16:56:02 -- common/autotest_common.sh@936 -- # '[' -z 1626638 ']' 00:08:46.647 16:56:02 
-- common/autotest_common.sh@940 -- # kill -0 1626638 00:08:46.647 16:56:02 -- common/autotest_common.sh@941 -- # uname 00:08:46.647 16:56:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:46.647 16:56:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1626638 00:08:46.647 16:56:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:46.647 16:56:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:46.647 16:56:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1626638' 00:08:46.647 killing process with pid 1626638 00:08:46.647 16:56:02 -- common/autotest_common.sh@955 -- # kill 1626638 00:08:46.647 16:56:02 -- common/autotest_common.sh@960 -- # wait 1626638 00:08:46.905 16:56:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:46.905 16:56:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:46.905 16:56:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:46.905 16:56:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.905 16:56:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.905 16:56:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.905 16:56:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.905 16:56:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.814 16:56:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.814 00:08:48.814 real 0m12.401s 00:08:48.814 user 0m28.042s 00:08:48.814 sys 0m3.148s 00:08:48.814 16:56:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.814 16:56:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.814 ************************************ 00:08:48.814 END TEST nvmf_delete_subsystem 00:08:48.814 ************************************ 00:08:49.073 16:56:04 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:49.073 16:56:04 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:49.073 16:56:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.073 16:56:04 -- common/autotest_common.sh@10 -- # set +x 00:08:49.073 ************************************ 00:08:49.073 START TEST nvmf_ns_masking 00:08:49.073 ************************************ 00:08:49.073 16:56:04 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:49.073 * Looking for test storage... 00:08:49.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.073 16:56:04 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.073 16:56:04 -- nvmf/common.sh@7 -- # uname -s 00:08:49.073 16:56:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.073 16:56:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.073 16:56:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.073 16:56:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.073 16:56:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.073 16:56:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.073 16:56:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.073 16:56:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.073 16:56:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.073 16:56:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.073 16:56:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.073 16:56:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.073 16:56:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.073 16:56:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.073 16:56:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.073 16:56:04 -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.073 16:56:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.073 16:56:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.073 16:56:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.073 16:56:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.073 16:56:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.073 16:56:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.073 16:56:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.073 16:56:04 -- paths/export.sh@5 -- # export PATH 00:08:49.073 16:56:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.073 16:56:04 -- nvmf/common.sh@47 -- # : 0 00:08:49.073 16:56:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.073 16:56:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.073 16:56:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.073 16:56:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.073 16:56:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.073 16:56:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.073 16:56:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.073 16:56:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.073 16:56:04 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:49.073 16:56:04 -- target/ns_masking.sh@11 -- # loops=5 
00:08:49.073 16:56:04 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:49.073 16:56:04 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:08:49.073 16:56:04 -- target/ns_masking.sh@15 -- # uuidgen 00:08:49.073 16:56:04 -- target/ns_masking.sh@15 -- # HOSTID=5466a600-17bd-424f-83f3-19f31b5a134a 00:08:49.073 16:56:04 -- target/ns_masking.sh@44 -- # nvmftestinit 00:08:49.073 16:56:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:49.073 16:56:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.073 16:56:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:49.073 16:56:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:49.073 16:56:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:49.073 16:56:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.073 16:56:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.073 16:56:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.073 16:56:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:49.073 16:56:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:49.073 16:56:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.073 16:56:04 -- common/autotest_common.sh@10 -- # set +x 00:08:50.983 16:56:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:50.983 16:56:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.983 16:56:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.983 16:56:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.983 16:56:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.983 16:56:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.983 16:56:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.983 16:56:06 -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.983 16:56:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.983 16:56:06 -- nvmf/common.sh@296 -- # e810=() 00:08:50.983 16:56:06 -- 
nvmf/common.sh@296 -- # local -ga e810 00:08:50.983 16:56:06 -- nvmf/common.sh@297 -- # x722=() 00:08:50.983 16:56:06 -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.983 16:56:06 -- nvmf/common.sh@298 -- # mlx=() 00:08:50.983 16:56:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.983 16:56:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.983 16:56:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.983 16:56:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.983 16:56:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.983 16:56:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.983 16:56:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:50.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:50.983 16:56:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
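The device discovery around this point ("Found net devices under 0000:0a:00.0: cvl_0_0") maps each matched PCI address to its kernel interface by globbing sysfs. A sketch of that lookup, with the sysfs path hard-coded as a stand-in so it runs without the E810 NICs from this log:

```shell
# Sketch of the sysfs lookup nvmf/common.sh performs: glob the net/
# directory under a PCI device to find its interface name, then strip the
# leading path with ${var##*/}. The entry below is a hard-coded stand-in;
# the real code expands "/sys/bus/pci/devices/$pci/net/"* on live hardware.
pci="0000:0a:00.0"
pci_net_devs=("/sys/bus/pci/devices/$pci/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the leaf names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
# → Found net devices under 0000:0a:00.0: cvl_0_0
```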
00:08:50.983 16:56:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.983 16:56:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:50.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:50.983 16:56:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.983 16:56:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.983 16:56:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.983 16:56:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:50.983 16:56:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.983 16:56:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:50.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:50.983 16:56:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.983 16:56:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.983 16:56:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.983 16:56:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:50.983 16:56:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.983 16:56:06 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:50.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:50.983 16:56:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.983 16:56:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:50.983 16:56:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:50.983 16:56:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:50.983 16:56:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:50.983 16:56:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.983 16:56:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.983 16:56:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.983 16:56:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.983 16:56:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.983 16:56:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.983 16:56:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.983 16:56:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.983 16:56:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.983 16:56:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.983 16:56:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.983 16:56:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.983 16:56:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.243 16:56:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.243 16:56:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.243 16:56:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.243 16:56:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.243 16:56:06 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.243 16:56:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.243 16:56:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:08:51.243 00:08:51.243 --- 10.0.0.2 ping statistics --- 00:08:51.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.243 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:51.243 16:56:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:08:51.243 00:08:51.243 --- 10.0.0.1 ping statistics --- 00:08:51.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.243 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:51.243 16:56:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.243 16:56:06 -- nvmf/common.sh@411 -- # return 0 00:08:51.243 16:56:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:51.243 16:56:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.243 16:56:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:51.243 16:56:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:51.243 16:56:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.243 16:56:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:51.243 16:56:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:51.243 16:56:06 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:08:51.243 16:56:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:51.243 16:56:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:51.243 16:56:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.243 16:56:06 -- nvmf/common.sh@470 -- # 
nvmfpid=1629539 00:08:51.243 16:56:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.243 16:56:06 -- nvmf/common.sh@471 -- # waitforlisten 1629539 00:08:51.243 16:56:06 -- common/autotest_common.sh@817 -- # '[' -z 1629539 ']' 00:08:51.243 16:56:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.243 16:56:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:51.243 16:56:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.243 16:56:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:51.243 16:56:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.243 [2024-04-18 16:56:06.828539] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:08:51.243 [2024-04-18 16:56:06.828621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.243 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.243 [2024-04-18 16:56:06.892578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.503 [2024-04-18 16:56:07.001721] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.503 [2024-04-18 16:56:07.001776] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.503 [2024-04-18 16:56:07.001799] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.503 [2024-04-18 16:56:07.001817] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:51.503 [2024-04-18 16:56:07.001833] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.503 [2024-04-18 16:56:07.001923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.503 [2024-04-18 16:56:07.001984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.503 [2024-04-18 16:56:07.002027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.503 [2024-04-18 16:56:07.002032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.503 16:56:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:51.503 16:56:07 -- common/autotest_common.sh@850 -- # return 0 00:08:51.503 16:56:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:51.503 16:56:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:51.503 16:56:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 16:56:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.503 16:56:07 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.762 [2024-04-18 16:56:07.386018] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.762 16:56:07 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:08:51.762 16:56:07 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:08:51.762 16:56:07 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:52.022 Malloc1 00:08:52.022 16:56:07 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:52.281 Malloc2 00:08:52.281 16:56:07 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:08:52.540 16:56:08 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:52.799 16:56:08 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.057 [2024-04-18 16:56:08.653947] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.057 16:56:08 -- target/ns_masking.sh@61 -- # connect 00:08:53.057 16:56:08 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5466a600-17bd-424f-83f3-19f31b5a134a -a 10.0.0.2 -s 4420 -i 4 00:08:53.317 16:56:08 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:08:53.317 16:56:08 -- common/autotest_common.sh@1184 -- # local i=0 00:08:53.317 16:56:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:53.317 16:56:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:53.317 16:56:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:55.228 16:56:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:55.228 16:56:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:55.228 16:56:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:55.228 16:56:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:55.228 16:56:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:55.228 16:56:10 -- common/autotest_common.sh@1194 -- # return 0 00:08:55.228 16:56:10 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:08:55.228 16:56:10 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:55.228 16:56:10 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 
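The `waitforserial SPDKISFASTANDAWESOME` / `lsblk -l -o NAME,SERIAL` / `grep -c` lines above are the connect-side poll: wait until the expected number of block devices with the subsystem's serial appear. A sketch under stated assumptions (the `interval` knob is added here so the failure path runs quickly; the real helper sleeps 2s and, per the `(( i++ <= 15 ))` check, retries about 16 times):

```shell
# Sketch of the waitforserial poll from autotest_common.sh as it appears in
# this log: count lsblk rows whose SERIAL matches, retry with a cap.
# The third parameter (poll interval) is an addition for illustration.
waitforserial() {
    local serial=$1 want=${2:-1} interval=${3:-2} i=0 have
    while (( i++ <= 15 )); do
        have=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")
        if (( have == want )); then
            return 0          # expected device(s) showed up
        fi
        sleep "$interval"
    done
    return 1                  # device never appeared within the budget
}

waitforserial NOSUCHSERIAL 1 0 || echo "gave up after 16 attempts"
```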
00:08:55.228 16:56:10 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:08:55.228 16:56:10 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:08:55.228 16:56:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:55.228 16:56:10 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:55.487 [ 0]:0x1 00:08:55.487 16:56:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:55.487 16:56:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:55.487 16:56:11 -- target/ns_masking.sh@40 -- # nguid=48fc41f7f53945d89101bbcba2bdde70 00:08:55.487 16:56:11 -- target/ns_masking.sh@41 -- # [[ 48fc41f7f53945d89101bbcba2bdde70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:55.487 16:56:11 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:08:55.746 16:56:11 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:08:55.746 16:56:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:55.746 16:56:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:55.746 [ 0]:0x1 00:08:55.746 16:56:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:55.746 16:56:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:55.746 16:56:11 -- target/ns_masking.sh@40 -- # nguid=48fc41f7f53945d89101bbcba2bdde70 00:08:55.746 16:56:11 -- target/ns_masking.sh@41 -- # [[ 48fc41f7f53945d89101bbcba2bdde70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:55.746 16:56:11 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:08:55.746 16:56:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:55.746 16:56:11 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:55.746 [ 1]:0x2 00:08:55.746 16:56:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:55.746 16:56:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:55.746 16:56:11 -- target/ns_masking.sh@40 -- # 
nguid=28985c19c73a4addb49c665a53a51f8a 00:08:55.746 16:56:11 -- target/ns_masking.sh@41 -- # [[ 28985c19c73a4addb49c665a53a51f8a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:55.746 16:56:11 -- target/ns_masking.sh@69 -- # disconnect 00:08:55.746 16:56:11 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.746 16:56:11 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.004 16:56:11 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:08:56.264 16:56:11 -- target/ns_masking.sh@77 -- # connect 1 00:08:56.264 16:56:11 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5466a600-17bd-424f-83f3-19f31b5a134a -a 10.0.0.2 -s 4420 -i 4 00:08:56.524 16:56:12 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:08:56.524 16:56:12 -- common/autotest_common.sh@1184 -- # local i=0 00:08:56.524 16:56:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.524 16:56:12 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:08:56.524 16:56:12 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:08:56.524 16:56:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:58.429 16:56:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:58.430 16:56:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:58.430 16:56:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.430 16:56:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:58.430 16:56:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:08:58.430 16:56:14 -- common/autotest_common.sh@1194 -- # return 0 00:08:58.430 16:56:14 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:08:58.430 16:56:14 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:58.430 16:56:14 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:08:58.430 16:56:14 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:08:58.430 16:56:14 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:08:58.430 16:56:14 -- common/autotest_common.sh@638 -- # local es=0 00:08:58.430 16:56:14 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:08:58.430 16:56:14 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:08:58.430 16:56:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:58.430 16:56:14 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:08:58.430 16:56:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:58.430 16:56:14 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:08:58.430 16:56:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:58.430 16:56:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:58.687 16:56:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:58.687 16:56:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:58.687 16:56:14 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:08:58.687 16:56:14 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.687 16:56:14 -- common/autotest_common.sh@641 -- # es=1 00:08:58.687 16:56:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:58.687 16:56:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:58.687 16:56:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:58.687 16:56:14 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 
00:08:58.687 16:56:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:58.687 16:56:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:58.687 [ 0]:0x2 00:08:58.687 16:56:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:58.687 16:56:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:58.687 16:56:14 -- target/ns_masking.sh@40 -- # nguid=28985c19c73a4addb49c665a53a51f8a 00:08:58.687 16:56:14 -- target/ns_masking.sh@41 -- # [[ 28985c19c73a4addb49c665a53a51f8a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.687 16:56:14 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:58.945 16:56:14 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:08:58.945 16:56:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:58.945 16:56:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:58.945 [ 0]:0x1 00:08:58.945 16:56:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:58.945 16:56:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:58.945 16:56:14 -- target/ns_masking.sh@40 -- # nguid=48fc41f7f53945d89101bbcba2bdde70 00:08:58.945 16:56:14 -- target/ns_masking.sh@41 -- # [[ 48fc41f7f53945d89101bbcba2bdde70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.945 16:56:14 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:08:58.945 16:56:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:58.945 16:56:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:58.945 [ 1]:0x2 00:08:58.945 16:56:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:58.945 16:56:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:58.945 16:56:14 -- target/ns_masking.sh@40 -- # nguid=28985c19c73a4addb49c665a53a51f8a 00:08:58.945 16:56:14 -- target/ns_masking.sh@41 -- # [[ 28985c19c73a4addb49c665a53a51f8a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:58.945 16:56:14 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:59.203 16:56:14 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:08:59.203 16:56:14 -- common/autotest_common.sh@638 -- # local es=0 00:08:59.203 16:56:14 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:08:59.203 16:56:14 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:08:59.203 16:56:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.203 16:56:14 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:08:59.203 16:56:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.203 16:56:14 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:08:59.203 16:56:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:59.203 16:56:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:08:59.203 16:56:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:59.203 16:56:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:59.461 16:56:14 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:08:59.461 16:56:14 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.461 16:56:14 -- common/autotest_common.sh@641 -- # es=1 00:08:59.461 16:56:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:59.461 16:56:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:59.461 16:56:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:59.461 16:56:14 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:08:59.461 16:56:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:08:59.461 16:56:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:08:59.461 [ 0]:0x2 
00:08:59.461 16:56:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:59.461 16:56:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:08:59.461 16:56:14 -- target/ns_masking.sh@40 -- # nguid=28985c19c73a4addb49c665a53a51f8a 00:08:59.461 16:56:14 -- target/ns_masking.sh@41 -- # [[ 28985c19c73a4addb49c665a53a51f8a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.461 16:56:14 -- target/ns_masking.sh@91 -- # disconnect 00:08:59.461 16:56:14 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.461 16:56:14 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:59.719 16:56:15 -- target/ns_masking.sh@95 -- # connect 2 00:08:59.719 16:56:15 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5466a600-17bd-424f-83f3-19f31b5a134a -a 10.0.0.2 -s 4420 -i 4 00:08:59.719 16:56:15 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:08:59.719 16:56:15 -- common/autotest_common.sh@1184 -- # local i=0 00:08:59.719 16:56:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:59.719 16:56:15 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:08:59.719 16:56:15 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:08:59.719 16:56:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:01.678 16:56:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:01.678 16:56:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:01.678 16:56:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.678 16:56:17 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:01.678 16:56:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices 
== nvme_device_counter )) 00:09:01.678 16:56:17 -- common/autotest_common.sh@1194 -- # return 0 00:09:01.678 16:56:17 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:01.678 16:56:17 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:01.937 16:56:17 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:01.937 16:56:17 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:01.937 16:56:17 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:01.937 16:56:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:01.937 16:56:17 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:01.937 [ 0]:0x1 00:09:01.937 16:56:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:01.937 16:56:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:01.937 16:56:17 -- target/ns_masking.sh@40 -- # nguid=48fc41f7f53945d89101bbcba2bdde70 00:09:01.937 16:56:17 -- target/ns_masking.sh@41 -- # [[ 48fc41f7f53945d89101bbcba2bdde70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.937 16:56:17 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:01.937 16:56:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:01.937 16:56:17 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:01.937 [ 1]:0x2 00:09:01.937 16:56:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:01.937 16:56:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:01.937 16:56:17 -- target/ns_masking.sh@40 -- # nguid=28985c19c73a4addb49c665a53a51f8a 00:09:01.937 16:56:17 -- target/ns_masking.sh@41 -- # [[ 28985c19c73a4addb49c665a53a51f8a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.937 16:56:17 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:02.196 16:56:17 -- target/ns_masking.sh@101 -- # 
NOT ns_is_visible 0x1 00:09:02.196 16:56:17 -- common/autotest_common.sh@638 -- # local es=0 00:09:02.196 16:56:17 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:02.196 16:56:17 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:02.196 16:56:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.196 16:56:17 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:02.196 16:56:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.196 16:56:17 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:02.196 16:56:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:02.196 16:56:17 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:02.196 16:56:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:02.196 16:56:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:02.196 16:56:17 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:02.196 16:56:17 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:02.196 16:56:17 -- common/autotest_common.sh@641 -- # es=1 00:09:02.196 16:56:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:02.196 16:56:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:02.196 16:56:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:02.196 16:56:17 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:02.196 16:56:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:02.196 16:56:17 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:02.196 [ 0]:0x2 00:09:02.196 16:56:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:02.196 16:56:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:02.196 16:56:17 -- target/ns_masking.sh@40 -- # nguid=28985c19c73a4addb49c665a53a51f8a 00:09:02.196 16:56:17 -- target/ns_masking.sh@41 -- # [[ 
28985c19c73a4addb49c665a53a51f8a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:02.196 16:56:17 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:02.196 16:56:17 -- common/autotest_common.sh@638 -- # local es=0 00:09:02.196 16:56:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:02.196 16:56:17 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.196 16:56:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.196 16:56:17 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.196 16:56:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.196 16:56:17 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.196 16:56:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.196 16:56:17 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.196 16:56:17 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:02.196 16:56:17 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:02.454 [2024-04-18 16:56:18.016567] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:02.454 request: 00:09:02.454 { 00:09:02.454 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:02.454 "nsid": 2, 
00:09:02.454 "host": "nqn.2016-06.io.spdk:host1", 00:09:02.454 "method": "nvmf_ns_remove_host", 00:09:02.454 "req_id": 1 00:09:02.454 } 00:09:02.454 Got JSON-RPC error response 00:09:02.454 response: 00:09:02.454 { 00:09:02.454 "code": -32602, 00:09:02.455 "message": "Invalid parameters" 00:09:02.455 } 00:09:02.455 16:56:18 -- common/autotest_common.sh@641 -- # es=1 00:09:02.455 16:56:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:02.455 16:56:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:02.455 16:56:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:02.455 16:56:18 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:02.455 16:56:18 -- common/autotest_common.sh@638 -- # local es=0 00:09:02.455 16:56:18 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:02.455 16:56:18 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:02.455 16:56:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.455 16:56:18 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:02.455 16:56:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.455 16:56:18 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:02.455 16:56:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:02.455 16:56:18 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:02.455 16:56:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:02.455 16:56:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:02.455 16:56:18 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:02.455 16:56:18 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:02.455 16:56:18 -- common/autotest_common.sh@641 -- # es=1 00:09:02.455 16:56:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:02.455 16:56:18 -- common/autotest_common.sh@660 
-- # [[ -n '' ]] 00:09:02.455 16:56:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:02.455 16:56:18 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:02.455 16:56:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:02.455 16:56:18 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:02.455 [ 0]:0x2 00:09:02.455 16:56:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:02.455 16:56:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:02.713 16:56:18 -- target/ns_masking.sh@40 -- # nguid=28985c19c73a4addb49c665a53a51f8a 00:09:02.713 16:56:18 -- target/ns_masking.sh@41 -- # [[ 28985c19c73a4addb49c665a53a51f8a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:02.713 16:56:18 -- target/ns_masking.sh@108 -- # disconnect 00:09:02.713 16:56:18 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.713 16:56:18 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.972 16:56:18 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:02.972 16:56:18 -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:02.973 16:56:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:02.973 16:56:18 -- nvmf/common.sh@117 -- # sync 00:09:02.973 16:56:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.973 16:56:18 -- nvmf/common.sh@120 -- # set +e 00:09:02.973 16:56:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.973 16:56:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.973 rmmod nvme_tcp 00:09:02.973 rmmod nvme_fabrics 00:09:02.973 rmmod nvme_keyring 00:09:02.973 16:56:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.973 16:56:18 -- nvmf/common.sh@124 -- # set -e 00:09:02.973 16:56:18 -- nvmf/common.sh@125 -- # return 0 00:09:02.973 16:56:18 -- 
nvmf/common.sh@478 -- # '[' -n 1629539 ']' 00:09:02.973 16:56:18 -- nvmf/common.sh@479 -- # killprocess 1629539 00:09:02.973 16:56:18 -- common/autotest_common.sh@936 -- # '[' -z 1629539 ']' 00:09:02.973 16:56:18 -- common/autotest_common.sh@940 -- # kill -0 1629539 00:09:02.973 16:56:18 -- common/autotest_common.sh@941 -- # uname 00:09:02.973 16:56:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:02.973 16:56:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1629539 00:09:02.973 16:56:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:02.973 16:56:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:02.973 16:56:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1629539' 00:09:02.973 killing process with pid 1629539 00:09:02.973 16:56:18 -- common/autotest_common.sh@955 -- # kill 1629539 00:09:02.973 16:56:18 -- common/autotest_common.sh@960 -- # wait 1629539 00:09:03.543 16:56:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:03.543 16:56:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:03.543 16:56:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:03.543 16:56:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.543 16:56:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.543 16:56:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.543 16:56:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.543 16:56:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.456 16:56:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.457 00:09:05.457 real 0m16.392s 00:09:05.457 user 0m51.011s 00:09:05.457 sys 0m3.549s 00:09:05.457 16:56:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.457 16:56:21 -- common/autotest_common.sh@10 -- # set +x 00:09:05.457 ************************************ 00:09:05.457 END TEST nvmf_ns_masking 00:09:05.457 
************************************ 00:09:05.457 16:56:21 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:05.457 16:56:21 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:05.457 16:56:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:05.457 16:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.457 16:56:21 -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 ************************************ 00:09:05.716 START TEST nvmf_nvme_cli 00:09:05.716 ************************************ 00:09:05.716 16:56:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:05.716 * Looking for test storage... 00:09:05.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.716 16:56:21 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.716 16:56:21 -- nvmf/common.sh@7 -- # uname -s 00:09:05.716 16:56:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.716 16:56:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.716 16:56:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.716 16:56:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.716 16:56:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.716 16:56:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.716 16:56:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.716 16:56:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.716 16:56:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.716 16:56:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.716 16:56:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.716 16:56:21 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.716 16:56:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.716 16:56:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.716 16:56:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.716 16:56:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.716 16:56:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.716 16:56:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.716 16:56:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.716 16:56:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.716 16:56:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.716 16:56:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.716 16:56:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.716 16:56:21 -- paths/export.sh@5 -- # export PATH 00:09:05.716 16:56:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.716 16:56:21 -- nvmf/common.sh@47 -- # : 0 00:09:05.716 16:56:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.716 16:56:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.716 16:56:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.716 16:56:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.716 16:56:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.716 16:56:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.716 16:56:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.716 16:56:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.716 16:56:21 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.716 16:56:21 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.716 16:56:21 -- target/nvme_cli.sh@14 
-- # devs=() 00:09:05.716 16:56:21 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:05.716 16:56:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:05.716 16:56:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.716 16:56:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:05.716 16:56:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:05.716 16:56:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:05.716 16:56:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.716 16:56:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.716 16:56:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.716 16:56:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:05.716 16:56:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:05.716 16:56:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:05.716 16:56:21 -- common/autotest_common.sh@10 -- # set +x 00:09:07.635 16:56:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:07.635 16:56:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:07.635 16:56:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:07.635 16:56:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:07.635 16:56:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:07.635 16:56:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:07.635 16:56:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:07.635 16:56:23 -- nvmf/common.sh@295 -- # net_devs=() 00:09:07.635 16:56:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:07.635 16:56:23 -- nvmf/common.sh@296 -- # e810=() 00:09:07.635 16:56:23 -- nvmf/common.sh@296 -- # local -ga e810 00:09:07.635 16:56:23 -- nvmf/common.sh@297 -- # x722=() 00:09:07.635 16:56:23 -- nvmf/common.sh@297 -- # local -ga x722 00:09:07.635 16:56:23 -- nvmf/common.sh@298 -- # mlx=() 00:09:07.635 16:56:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:07.635 16:56:23 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.635 16:56:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:07.635 16:56:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:07.635 16:56:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:07.635 16:56:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.635 16:56:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:07.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:07.635 16:56:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.635 16:56:23 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.635 16:56:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:07.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:07.635 16:56:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:07.635 16:56:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:07.635 16:56:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.635 16:56:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.635 16:56:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:07.635 16:56:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.635 16:56:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:07.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:07.635 16:56:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.635 16:56:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.635 16:56:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.635 16:56:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:07.635 16:56:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.635 16:56:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:07.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:07.635 16:56:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.635 16:56:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:07.635 16:56:23 -- nvmf/common.sh@403 
-- # is_hw=yes 00:09:07.636 16:56:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:07.636 16:56:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:07.636 16:56:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:07.636 16:56:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.636 16:56:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.636 16:56:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.636 16:56:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:07.636 16:56:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.636 16:56:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.636 16:56:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:07.636 16:56:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.636 16:56:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.636 16:56:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:07.636 16:56:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:07.636 16:56:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.636 16:56:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.636 16:56:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.636 16:56:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.636 16:56:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:07.636 16:56:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.636 16:56:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.636 16:56:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.636 16:56:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:07.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:07.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:09:07.636 00:09:07.636 --- 10.0.0.2 ping statistics --- 00:09:07.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.636 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:07.636 16:56:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:09:07.636 00:09:07.636 --- 10.0.0.1 ping statistics --- 00:09:07.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.636 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:09:07.636 16:56:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.636 16:56:23 -- nvmf/common.sh@411 -- # return 0 00:09:07.636 16:56:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:07.636 16:56:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.636 16:56:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:07.636 16:56:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:07.636 16:56:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.636 16:56:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:07.636 16:56:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:07.636 16:56:23 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:07.636 16:56:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:07.636 16:56:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:07.636 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:07.636 16:56:23 -- nvmf/common.sh@470 -- # nvmfpid=1633095 00:09:07.636 16:56:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.636 16:56:23 -- nvmf/common.sh@471 -- # waitforlisten 1633095 00:09:07.636 16:56:23 -- common/autotest_common.sh@817 
-- # '[' -z 1633095 ']' 00:09:07.636 16:56:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.636 16:56:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:07.636 16:56:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.636 16:56:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:07.636 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:07.898 [2024-04-18 16:56:23.340852] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:09:07.898 [2024-04-18 16:56:23.340944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.898 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.898 [2024-04-18 16:56:23.405222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.898 [2024-04-18 16:56:23.514778] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.898 [2024-04-18 16:56:23.514839] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.898 [2024-04-18 16:56:23.514862] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.898 [2024-04-18 16:56:23.514882] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.898 [2024-04-18 16:56:23.514912] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:07.898 [2024-04-18 16:56:23.515004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.898 [2024-04-18 16:56:23.515068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.898 [2024-04-18 16:56:23.515114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.898 [2024-04-18 16:56:23.515119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.159 16:56:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:08.159 16:56:23 -- common/autotest_common.sh@850 -- # return 0 00:09:08.159 16:56:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:08.159 16:56:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 16:56:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.159 16:56:23 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.159 16:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 [2024-04-18 16:56:23.671307] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.159 16:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.159 16:56:23 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.159 16:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 Malloc0 00:09:08.159 16:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.159 16:56:23 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:08.159 16:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 Malloc1 00:09:08.159 16:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:09:08.159 16:56:23 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:08.159 16:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 16:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.159 16:56:23 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.159 16:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 16:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.159 16:56:23 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:08.159 16:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 16:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.159 16:56:23 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.159 16:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 [2024-04-18 16:56:23.757006] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.159 16:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.159 16:56:23 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:08.159 16:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.159 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:08.159 16:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.159 16:56:23 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:08.159 00:09:08.159 Discovery Log Number of Records 2, Generation counter 2 00:09:08.159 =====Discovery Log Entry 0====== 00:09:08.159 trtype: tcp 00:09:08.159 adrfam: ipv4 00:09:08.159 subtype: current discovery subsystem 00:09:08.159 treq: not required 00:09:08.159 portid: 0 00:09:08.159 trsvcid: 4420 00:09:08.159 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:08.159 traddr: 10.0.0.2 00:09:08.160 eflags: explicit discovery connections, duplicate discovery information 00:09:08.160 sectype: none 00:09:08.160 =====Discovery Log Entry 1====== 00:09:08.160 trtype: tcp 00:09:08.160 adrfam: ipv4 00:09:08.160 subtype: nvme subsystem 00:09:08.160 treq: not required 00:09:08.160 portid: 0 00:09:08.160 trsvcid: 4420 00:09:08.160 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:08.160 traddr: 10.0.0.2 00:09:08.160 eflags: none 00:09:08.160 sectype: none 00:09:08.418 16:56:23 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:08.418 16:56:23 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:08.418 16:56:23 -- nvmf/common.sh@511 -- # local dev _ 00:09:08.419 16:56:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.419 16:56:23 -- nvmf/common.sh@510 -- # nvme list 00:09:08.419 16:56:23 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:08.419 16:56:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.419 16:56:23 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:08.419 16:56:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:08.419 16:56:23 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:08.419 16:56:23 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.986 16:56:24 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:08.986 16:56:24 -- common/autotest_common.sh@1184 -- # local i=0 00:09:08.986 16:56:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.986 16:56:24 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:08.986 16:56:24 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:08.986 16:56:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:10.895 16:56:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:10.895 16:56:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:10.895 16:56:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.895 16:56:26 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:10.895 16:56:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.895 16:56:26 -- common/autotest_common.sh@1194 -- # return 0 00:09:10.895 16:56:26 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:10.895 16:56:26 -- nvmf/common.sh@511 -- # local dev _ 00:09:10.895 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:10.896 16:56:26 -- nvmf/common.sh@510 -- # nvme list 00:09:11.155 16:56:26 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:11.155 16:56:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:11.155 16:56:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:09:11.155 /dev/nvme0n1 ]] 00:09:11.155 16:56:26 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:11.155 16:56:26 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:11.155 16:56:26 -- nvmf/common.sh@511 -- # local dev _ 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- nvmf/common.sh@510 -- # nvme list 00:09:11.155 16:56:26 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:11.155 16:56:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:11.155 16:56:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:11.155 16:56:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:11.155 16:56:26 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:11.155 16:56:26 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.414 16:56:27 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.414 16:56:27 -- common/autotest_common.sh@1205 -- # local i=0 00:09:11.414 16:56:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:11.414 16:56:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.414 16:56:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:11.414 16:56:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.414 16:56:27 -- common/autotest_common.sh@1217 -- # return 0 00:09:11.414 16:56:27 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:09:11.414 16:56:27 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.414 16:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.414 16:56:27 -- common/autotest_common.sh@10 -- # set +x 00:09:11.414 16:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.414 16:56:27 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:11.414 16:56:27 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:11.414 16:56:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:11.414 16:56:27 -- nvmf/common.sh@117 -- # sync 00:09:11.414 16:56:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.414 16:56:27 -- nvmf/common.sh@120 -- # set +e 00:09:11.414 16:56:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.414 16:56:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.414 rmmod nvme_tcp 00:09:11.414 rmmod nvme_fabrics 00:09:11.414 rmmod nvme_keyring 00:09:11.414 16:56:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.414 16:56:27 -- nvmf/common.sh@124 -- # set -e 00:09:11.414 16:56:27 -- nvmf/common.sh@125 -- # return 0 00:09:11.414 16:56:27 -- nvmf/common.sh@478 -- # '[' -n 1633095 ']' 00:09:11.414 16:56:27 -- nvmf/common.sh@479 -- # killprocess 1633095 00:09:11.414 16:56:27 -- common/autotest_common.sh@936 -- # '[' -z 1633095 ']' 00:09:11.414 16:56:27 -- common/autotest_common.sh@940 -- # kill -0 1633095 00:09:11.414 16:56:27 -- common/autotest_common.sh@941 -- # uname 00:09:11.414 16:56:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.414 16:56:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1633095 00:09:11.674 16:56:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:11.674 16:56:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:11.674 16:56:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1633095' 00:09:11.674 killing process with pid 1633095 00:09:11.674 16:56:27 -- 
common/autotest_common.sh@955 -- # kill 1633095 00:09:11.674 16:56:27 -- common/autotest_common.sh@960 -- # wait 1633095 00:09:11.934 16:56:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:11.934 16:56:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:11.934 16:56:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:11.934 16:56:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.934 16:56:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.934 16:56:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.934 16:56:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.934 16:56:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.847 16:56:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.847 00:09:13.847 real 0m8.354s 00:09:13.847 user 0m16.022s 00:09:13.847 sys 0m2.075s 00:09:13.847 16:56:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:13.847 16:56:29 -- common/autotest_common.sh@10 -- # set +x 00:09:13.847 ************************************ 00:09:13.847 END TEST nvmf_nvme_cli 00:09:13.847 ************************************ 00:09:13.847 16:56:29 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:13.847 16:56:29 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:13.847 16:56:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:13.847 16:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.847 16:56:29 -- common/autotest_common.sh@10 -- # set +x 00:09:14.106 ************************************ 00:09:14.106 START TEST nvmf_vfio_user 00:09:14.106 ************************************ 00:09:14.106 16:56:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:14.106 * Looking for test storage... 
00:09:14.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.106 16:56:29 -- nvmf/common.sh@7 -- # uname -s 00:09:14.106 16:56:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.106 16:56:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.106 16:56:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.106 16:56:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.106 16:56:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.106 16:56:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.106 16:56:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.106 16:56:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.106 16:56:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.106 16:56:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.106 16:56:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:14.106 16:56:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:14.106 16:56:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.106 16:56:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.106 16:56:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.106 16:56:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.106 16:56:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.106 16:56:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.106 16:56:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.106 16:56:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.106 16:56:29 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.106 16:56:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.106 16:56:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.106 16:56:29 -- paths/export.sh@5 -- # export PATH 00:09:14.106 16:56:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.106 16:56:29 -- nvmf/common.sh@47 -- # : 0 00:09:14.106 16:56:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.106 16:56:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.106 16:56:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.106 16:56:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.106 16:56:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.106 16:56:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.106 16:56:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.106 16:56:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1633910 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1633910' 00:09:14.106 Process pid: 1633910 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:14.106 16:56:29 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1633910 00:09:14.106 16:56:29 -- common/autotest_common.sh@817 -- # '[' -z 1633910 ']' 00:09:14.106 16:56:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.106 16:56:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:14.106 16:56:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.106 16:56:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:14.106 16:56:29 -- common/autotest_common.sh@10 -- # set +x 00:09:14.106 [2024-04-18 16:56:29.754099] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:09:14.106 [2024-04-18 16:56:29.754182] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.106 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.365 [2024-04-18 16:56:29.820407] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.365 [2024-04-18 16:56:29.942371] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:14.365 [2024-04-18 16:56:29.942437] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.365 [2024-04-18 16:56:29.942464] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.365 [2024-04-18 16:56:29.942484] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.365 [2024-04-18 16:56:29.942502] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.365 [2024-04-18 16:56:29.942611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.365 [2024-04-18 16:56:29.942670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.365 [2024-04-18 16:56:29.942699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.365 [2024-04-18 16:56:29.942707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.303 16:56:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:15.303 16:56:30 -- common/autotest_common.sh@850 -- # return 0 00:09:15.303 16:56:30 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:16.243 16:56:31 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:16.501 16:56:31 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:16.501 16:56:31 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:16.501 16:56:31 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:16.501 16:56:31 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:16.501 16:56:31 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:16.760 Malloc1 00:09:16.760 16:56:32 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:17.018 16:56:32 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:17.277 16:56:32 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:17.535 16:56:32 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:17.535 16:56:32 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:17.535 16:56:32 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:17.794 Malloc2 00:09:17.794 16:56:33 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:18.055 16:56:33 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:18.055 16:56:33 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:18.321 16:56:34 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:18.321 16:56:34 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:18.321 16:56:34 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:18.321 16:56:34 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:18.321 16:56:34 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:18.321 16:56:34 -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:18.609 [2024-04-18 16:56:34.027887] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:09:18.609 [2024-04-18 16:56:34.027944] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634466 ] 00:09:18.609 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.609 [2024-04-18 16:56:34.061769] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:18.609 [2024-04-18 16:56:34.069817] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:18.609 [2024-04-18 16:56:34.069846] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdffb0bf000 00:09:18.609 [2024-04-18 16:56:34.072391] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:18.609 [2024-04-18 16:56:34.072805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:18.609 [2024-04-18 16:56:34.073810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:18.609 [2024-04-18 16:56:34.074812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:18.609 [2024-04-18 16:56:34.075817] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:09:18.609 [2024-04-18 16:56:34.076821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:18.609 [2024-04-18 16:56:34.077829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:18.609 [2024-04-18 16:56:34.078832] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:18.609 [2024-04-18 16:56:34.079839] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:18.609 [2024-04-18 16:56:34.079863] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdffb0b4000 00:09:18.609 [2024-04-18 16:56:34.081007] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:18.609 [2024-04-18 16:56:34.099636] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:18.609 [2024-04-18 16:56:34.099672] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:18.609 [2024-04-18 16:56:34.101980] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:18.609 [2024-04-18 16:56:34.102032] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:18.609 [2024-04-18 16:56:34.102123] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:18.609 [2024-04-18 16:56:34.102152] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:18.609 [2024-04-18 16:56:34.102163] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:18.609 [2024-04-18 16:56:34.102965] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:18.609 [2024-04-18 16:56:34.102984] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:18.609 [2024-04-18 16:56:34.102996] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:18.609 [2024-04-18 16:56:34.103969] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:18.609 [2024-04-18 16:56:34.103989] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:18.609 [2024-04-18 16:56:34.104002] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:18.609 [2024-04-18 16:56:34.104979] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:18.610 [2024-04-18 16:56:34.104999] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:18.610 [2024-04-18 16:56:34.105984] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:18.610 [2024-04-18 16:56:34.106002] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:18.610 [2024-04-18 16:56:34.106011] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:18.610 [2024-04-18 16:56:34.106022] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:18.610 [2024-04-18 16:56:34.106131] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:18.610 [2024-04-18 16:56:34.106139] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:18.610 [2024-04-18 16:56:34.106147] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:18.610 [2024-04-18 16:56:34.106992] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:18.610 [2024-04-18 16:56:34.107996] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:18.610 [2024-04-18 16:56:34.108998] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:18.610 [2024-04-18 16:56:34.109996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:18.610 [2024-04-18 16:56:34.110093] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:18.610 [2024-04-18 16:56:34.111013] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:18.610 [2024-04-18 16:56:34.111035] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:18.610 [2024-04-18 16:56:34.111044] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111068] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:18.610 [2024-04-18 16:56:34.111083] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111108] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:18.610 [2024-04-18 16:56:34.111118] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:18.610 [2024-04-18 16:56:34.111140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.111205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.111220] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:18.610 [2024-04-18 16:56:34.111228] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:18.610 [2024-04-18 16:56:34.111236] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:18.610 [2024-04-18 16:56:34.111243] 
nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:18.610 [2024-04-18 16:56:34.111250] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:18.610 [2024-04-18 16:56:34.111258] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:18.610 [2024-04-18 16:56:34.111265] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111276] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.111308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.111331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:18.610 [2024-04-18 16:56:34.111344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:18.610 [2024-04-18 16:56:34.111355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:18.610 [2024-04-18 16:56:34.111390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:18.610 [2024-04-18 16:56:34.111402] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111419] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.111461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.111472] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:18.610 [2024-04-18 16:56:34.111481] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111495] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111506] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111520] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.111536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.111590] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111606] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111619] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:18.610 [2024-04-18 16:56:34.111628] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:18.610 [2024-04-18 16:56:34.111638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.111652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.111684] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:18.610 [2024-04-18 16:56:34.111705] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111719] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111731] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:18.610 [2024-04-18 16:56:34.111739] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:18.610 [2024-04-18 16:56:34.111763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.111787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.111808] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111822] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111834] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:18.610 [2024-04-18 16:56:34.111842] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:18.610 [2024-04-18 16:56:34.111851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.111864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.111878] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111889] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111902] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111912] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111920] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111928] 
nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:18.610 [2024-04-18 16:56:34.111939] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:18.610 [2024-04-18 16:56:34.111947] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:18.610 [2024-04-18 16:56:34.111972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.111991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.112008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.112020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:18.610 [2024-04-18 16:56:34.112035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:18.610 [2024-04-18 16:56:34.112046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:18.611 [2024-04-18 16:56:34.112061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:18.611 [2024-04-18 16:56:34.112072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:18.611 [2024-04-18 16:56:34.112089] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:18.611 
[2024-04-18 16:56:34.112098] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:18.611 [2024-04-18 16:56:34.112104] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:18.611 [2024-04-18 16:56:34.112109] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:18.611 [2024-04-18 16:56:34.112119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:18.611 [2024-04-18 16:56:34.112130] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:18.611 [2024-04-18 16:56:34.112138] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:18.611 [2024-04-18 16:56:34.112147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:18.611 [2024-04-18 16:56:34.112158] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:18.611 [2024-04-18 16:56:34.112166] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:18.611 [2024-04-18 16:56:34.112174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:18.611 [2024-04-18 16:56:34.112186] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:18.611 [2024-04-18 16:56:34.112193] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:18.611 [2024-04-18 16:56:34.112202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:09:18.611 [2024-04-18 16:56:34.112213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:18.611 [2024-04-18 16:56:34.112234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:18.611 [2024-04-18 16:56:34.112249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:18.611 [2024-04-18 16:56:34.112264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:18.611 ===================================================== 00:09:18.611 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:18.611 ===================================================== 00:09:18.611 Controller Capabilities/Features 00:09:18.611 ================================ 00:09:18.611 Vendor ID: 4e58 00:09:18.611 Subsystem Vendor ID: 4e58 00:09:18.611 Serial Number: SPDK1 00:09:18.611 Model Number: SPDK bdev Controller 00:09:18.611 Firmware Version: 24.05 00:09:18.611 Recommended Arb Burst: 6 00:09:18.611 IEEE OUI Identifier: 8d 6b 50 00:09:18.611 Multi-path I/O 00:09:18.611 May have multiple subsystem ports: Yes 00:09:18.611 May have multiple controllers: Yes 00:09:18.611 Associated with SR-IOV VF: No 00:09:18.611 Max Data Transfer Size: 131072 00:09:18.611 Max Number of Namespaces: 32 00:09:18.611 Max Number of I/O Queues: 127 00:09:18.611 NVMe Specification Version (VS): 1.3 00:09:18.611 NVMe Specification Version (Identify): 1.3 00:09:18.611 Maximum Queue Entries: 256 00:09:18.611 Contiguous Queues Required: Yes 00:09:18.611 Arbitration Mechanisms Supported 00:09:18.611 Weighted Round Robin: Not Supported 00:09:18.611 Vendor Specific: Not Supported 00:09:18.611 Reset Timeout: 15000 ms 00:09:18.611 Doorbell Stride: 4 bytes 00:09:18.611 NVM Subsystem 
Reset: Not Supported 00:09:18.611 Command Sets Supported 00:09:18.611 NVM Command Set: Supported 00:09:18.611 Boot Partition: Not Supported 00:09:18.611 Memory Page Size Minimum: 4096 bytes 00:09:18.611 Memory Page Size Maximum: 4096 bytes 00:09:18.611 Persistent Memory Region: Not Supported 00:09:18.611 Optional Asynchronous Events Supported 00:09:18.611 Namespace Attribute Notices: Supported 00:09:18.611 Firmware Activation Notices: Not Supported 00:09:18.611 ANA Change Notices: Not Supported 00:09:18.611 PLE Aggregate Log Change Notices: Not Supported 00:09:18.611 LBA Status Info Alert Notices: Not Supported 00:09:18.611 EGE Aggregate Log Change Notices: Not Supported 00:09:18.611 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.611 Zone Descriptor Change Notices: Not Supported 00:09:18.611 Discovery Log Change Notices: Not Supported 00:09:18.611 Controller Attributes 00:09:18.611 128-bit Host Identifier: Supported 00:09:18.611 Non-Operational Permissive Mode: Not Supported 00:09:18.611 NVM Sets: Not Supported 00:09:18.611 Read Recovery Levels: Not Supported 00:09:18.611 Endurance Groups: Not Supported 00:09:18.611 Predictable Latency Mode: Not Supported 00:09:18.611 Traffic Based Keep ALive: Not Supported 00:09:18.611 Namespace Granularity: Not Supported 00:09:18.611 SQ Associations: Not Supported 00:09:18.611 UUID List: Not Supported 00:09:18.611 Multi-Domain Subsystem: Not Supported 00:09:18.611 Fixed Capacity Management: Not Supported 00:09:18.611 Variable Capacity Management: Not Supported 00:09:18.611 Delete Endurance Group: Not Supported 00:09:18.611 Delete NVM Set: Not Supported 00:09:18.611 Extended LBA Formats Supported: Not Supported 00:09:18.611 Flexible Data Placement Supported: Not Supported 00:09:18.611 00:09:18.611 Controller Memory Buffer Support 00:09:18.611 ================================ 00:09:18.611 Supported: No 00:09:18.611 00:09:18.611 Persistent Memory Region Support 00:09:18.611 ================================ 00:09:18.611 
Supported: No 00:09:18.611 00:09:18.611 Admin Command Set Attributes 00:09:18.611 ============================ 00:09:18.611 Security Send/Receive: Not Supported 00:09:18.611 Format NVM: Not Supported 00:09:18.611 Firmware Activate/Download: Not Supported 00:09:18.611 Namespace Management: Not Supported 00:09:18.611 Device Self-Test: Not Supported 00:09:18.611 Directives: Not Supported 00:09:18.611 NVMe-MI: Not Supported 00:09:18.611 Virtualization Management: Not Supported 00:09:18.611 Doorbell Buffer Config: Not Supported 00:09:18.611 Get LBA Status Capability: Not Supported 00:09:18.611 Command & Feature Lockdown Capability: Not Supported 00:09:18.611 Abort Command Limit: 4 00:09:18.611 Async Event Request Limit: 4 00:09:18.611 Number of Firmware Slots: N/A 00:09:18.611 Firmware Slot 1 Read-Only: N/A 00:09:18.611 Firmware Activation Without Reset: N/A 00:09:18.611 Multiple Update Detection Support: N/A 00:09:18.611 Firmware Update Granularity: No Information Provided 00:09:18.611 Per-Namespace SMART Log: No 00:09:18.611 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.611 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:18.611 Command Effects Log Page: Supported 00:09:18.611 Get Log Page Extended Data: Supported 00:09:18.611 Telemetry Log Pages: Not Supported 00:09:18.611 Persistent Event Log Pages: Not Supported 00:09:18.611 Supported Log Pages Log Page: May Support 00:09:18.611 Commands Supported & Effects Log Page: Not Supported 00:09:18.611 Feature Identifiers & Effects Log Page:May Support 00:09:18.611 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.611 Data Area 4 for Telemetry Log: Not Supported 00:09:18.611 Error Log Page Entries Supported: 128 00:09:18.611 Keep Alive: Supported 00:09:18.611 Keep Alive Granularity: 10000 ms 00:09:18.611 00:09:18.611 NVM Command Set Attributes 00:09:18.611 ========================== 00:09:18.611 Submission Queue Entry Size 00:09:18.611 Max: 64 00:09:18.611 Min: 64 00:09:18.611 Completion Queue Entry 
Size 00:09:18.611 Max: 16 00:09:18.611 Min: 16 00:09:18.611 Number of Namespaces: 32 00:09:18.611 Compare Command: Supported 00:09:18.611 Write Uncorrectable Command: Not Supported 00:09:18.611 Dataset Management Command: Supported 00:09:18.611 Write Zeroes Command: Supported 00:09:18.611 Set Features Save Field: Not Supported 00:09:18.611 Reservations: Not Supported 00:09:18.611 Timestamp: Not Supported 00:09:18.611 Copy: Supported 00:09:18.611 Volatile Write Cache: Present 00:09:18.611 Atomic Write Unit (Normal): 1 00:09:18.611 Atomic Write Unit (PFail): 1 00:09:18.611 Atomic Compare & Write Unit: 1 00:09:18.611 Fused Compare & Write: Supported 00:09:18.611 Scatter-Gather List 00:09:18.611 SGL Command Set: Supported (Dword aligned) 00:09:18.611 SGL Keyed: Not Supported 00:09:18.611 SGL Bit Bucket Descriptor: Not Supported 00:09:18.611 SGL Metadata Pointer: Not Supported 00:09:18.611 Oversized SGL: Not Supported 00:09:18.611 SGL Metadata Address: Not Supported 00:09:18.611 SGL Offset: Not Supported 00:09:18.611 Transport SGL Data Block: Not Supported 00:09:18.611 Replay Protected Memory Block: Not Supported 00:09:18.611 00:09:18.611 Firmware Slot Information 00:09:18.611 ========================= 00:09:18.611 Active slot: 1 00:09:18.611 Slot 1 Firmware Revision: 24.05 00:09:18.611 00:09:18.611 00:09:18.611 Commands Supported and Effects 00:09:18.611 ============================== 00:09:18.611 Admin Commands 00:09:18.611 -------------- 00:09:18.611 Get Log Page (02h): Supported 00:09:18.611 Identify (06h): Supported 00:09:18.612 Abort (08h): Supported 00:09:18.612 Set Features (09h): Supported 00:09:18.612 Get Features (0Ah): Supported 00:09:18.612 Asynchronous Event Request (0Ch): Supported 00:09:18.612 Keep Alive (18h): Supported 00:09:18.612 I/O Commands 00:09:18.612 ------------ 00:09:18.612 Flush (00h): Supported LBA-Change 00:09:18.612 Write (01h): Supported LBA-Change 00:09:18.612 Read (02h): Supported 00:09:18.612 Compare (05h): Supported 00:09:18.612 Write 
Zeroes (08h): Supported LBA-Change 00:09:18.612 Dataset Management (09h): Supported LBA-Change 00:09:18.612 Copy (19h): Supported LBA-Change 00:09:18.612 Unknown (79h): Supported LBA-Change 00:09:18.612 Unknown (7Ah): Supported 00:09:18.612 00:09:18.612 Error Log 00:09:18.612 ========= 00:09:18.612 00:09:18.612 Arbitration 00:09:18.612 =========== 00:09:18.612 Arbitration Burst: 1 00:09:18.612 00:09:18.612 Power Management 00:09:18.612 ================ 00:09:18.612 Number of Power States: 1 00:09:18.612 Current Power State: Power State #0 00:09:18.612 Power State #0: 00:09:18.612 Max Power: 0.00 W 00:09:18.612 Non-Operational State: Operational 00:09:18.612 Entry Latency: Not Reported 00:09:18.612 Exit Latency: Not Reported 00:09:18.612 Relative Read Throughput: 0 00:09:18.612 Relative Read Latency: 0 00:09:18.612 Relative Write Throughput: 0 00:09:18.612 Relative Write Latency: 0 00:09:18.612 Idle Power: Not Reported 00:09:18.612 Active Power: Not Reported 00:09:18.612 Non-Operational Permissive Mode: Not Supported 00:09:18.612 00:09:18.612 Health Information 00:09:18.612 ================== 00:09:18.612 Critical Warnings: 00:09:18.612 Available Spare Space: OK 00:09:18.612 Temperature: OK 00:09:18.612 Device Reliability: OK 00:09:18.612 Read Only: No 00:09:18.612 Volatile Memory Backup: OK 00:09:18.612 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-04-18 16:56:34.112415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:18.612 [2024-04-18 16:56:34.112433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:18.612 [2024-04-18 16:56:34.112469] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:18.612 [2024-04-18 16:56:34.112486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:18.612 [2024-04-18 16:56:34.112497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:18.612 [2024-04-18 16:56:34.112507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:18.612 [2024-04-18 16:56:34.112517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:18.612 [2024-04-18 16:56:34.113028] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:18.612 [2024-04-18 16:56:34.113049] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:18.612 [2024-04-18 16:56:34.114028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:18.612 [2024-04-18 16:56:34.114104] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:18.612 [2024-04-18 16:56:34.114123] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:18.612 [2024-04-18 16:56:34.115394] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:18.612 [2024-04-18 16:56:34.115419] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 1 milliseconds 00:09:18.612 [2024-04-18 16:56:34.115476] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:18.612 [2024-04-18 16:56:34.120396] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:18.612
00:09:18.612 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:18.612 Available Spare: 0% 00:09:18.612 Available Spare Threshold: 0% 00:09:18.612 Life Percentage Used: 0% 00:09:18.612 Data Units Read: 0 00:09:18.612 Data Units Written: 0 00:09:18.612 Host Read Commands: 0 00:09:18.612 Host Write Commands: 0 00:09:18.612 Controller Busy Time: 0 minutes 00:09:18.612 Power Cycles: 0 00:09:18.612 Power On Hours: 0 hours 00:09:18.612 Unsafe Shutdowns: 0 00:09:18.612 Unrecoverable Media Errors: 0 00:09:18.612 Lifetime Error Log Entries: 0 00:09:18.612 Warning Temperature Time: 0 minutes 00:09:18.612 Critical Temperature Time: 0 minutes 00:09:18.612 00:09:18.612 Number of Queues 00:09:18.612 ================ 00:09:18.612 Number of I/O Submission Queues: 127 00:09:18.612 Number of I/O Completion Queues: 127 00:09:18.612 00:09:18.612 Active Namespaces 00:09:18.612 ================= 00:09:18.612 Namespace ID:1 00:09:18.612 Error Recovery Timeout: Unlimited 00:09:18.612 Command Set Identifier: NVM (00h) 00:09:18.612 Deallocate: Supported 00:09:18.612 Deallocated/Unwritten Error: Not Supported 00:09:18.612 Deallocated Read Value: Unknown 00:09:18.612 Deallocate in Write Zeroes: Not Supported 00:09:18.612 Deallocated Guard Field: 0xFFFF 00:09:18.612 Flush: Supported 00:09:18.612 Reservation: Supported 00:09:18.612 Namespace Sharing Capabilities: Multiple Controllers 00:09:18.612 Size (in LBAs): 131072 (0GiB) 00:09:18.612 Capacity (in LBAs): 131072 (0GiB) 00:09:18.612 Utilization (in LBAs): 131072 (0GiB) 00:09:18.612 NGUID: CAD99B0C67724743B8690B0C952D4454 00:09:18.612 UUID: cad99b0c-6772-4743-b869-0b0c952d4454 00:09:18.612 Thin Provisioning: Not Supported 00:09:18.612 Per-NS Atomic Units: Yes 00:09:18.612 Atomic Boundary Size (Normal): 0 00:09:18.612 Atomic Boundary Size (PFail): 0 00:09:18.612 Atomic Boundary Offset: 0 00:09:18.612 Maximum Single Source Range Length: 65535 00:09:18.612 Maximum Copy Length: 65535 
00:09:18.612 Maximum Source Range Count: 1 00:09:18.612 NGUID/EUI64 Never Reused: No 00:09:18.612 Namespace Write Protected: No 00:09:18.612 Number of LBA Formats: 1 00:09:18.612 Current LBA Format: LBA Format #00 00:09:18.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.612 00:09:18.612 16:56:34 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:18.612 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.872 [2024-04-18 16:56:34.360269] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:24.149 [2024-04-18 16:56:39.382777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:24.149 Initializing NVMe Controllers 00:09:24.149 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:24.149 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:24.149 Initialization complete. Launching workers. 
00:09:24.149 ======================================================== 00:09:24.149 Latency(us) 00:09:24.149 Device Information : IOPS MiB/s Average min max 00:09:24.149 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34968.00 136.59 3661.10 1184.47 7466.98 00:09:24.149 ======================================================== 00:09:24.149 Total : 34968.00 136.59 3661.10 1184.47 7466.98 00:09:24.149 00:09:24.149 16:56:39 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:24.149 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.149 [2024-04-18 16:56:39.617990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:29.432 [2024-04-18 16:56:44.660306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:29.432 Initializing NVMe Controllers 00:09:29.432 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:29.432 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:29.432 Initialization complete. Launching workers. 
00:09:29.432 ======================================================== 00:09:29.432 Latency(us) 00:09:29.432 Device Information : IOPS MiB/s Average min max 00:09:29.432 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16010.17 62.54 8000.14 5868.01 15781.76 00:09:29.432 ======================================================== 00:09:29.432 Total : 16010.17 62.54 8000.14 5868.01 15781.76 00:09:29.432 00:09:29.432 16:56:44 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:29.432 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.432 [2024-04-18 16:56:44.870334] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:34.705 [2024-04-18 16:56:49.953835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:34.705 Initializing NVMe Controllers 00:09:34.705 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:34.705 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:34.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:34.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:34.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:34.705 Initialization complete. Launching workers. 
00:09:34.705 Starting thread on core 2 00:09:34.705 Starting thread on core 3 00:09:34.705 Starting thread on core 1 00:09:34.705 16:56:50 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:34.705 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.705 [2024-04-18 16:56:50.266968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:37.996 [2024-04-18 16:56:53.333546] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:37.996 Initializing NVMe Controllers 00:09:37.996 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:37.996 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:37.996 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:37.996 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:37.996 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:37.996 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:37.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:37.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:37.996 Initialization complete. Launching workers. 
00:09:37.996 Starting thread on core 1 with urgent priority queue 00:09:37.996 Starting thread on core 2 with urgent priority queue 00:09:37.996 Starting thread on core 3 with urgent priority queue 00:09:37.996 Starting thread on core 0 with urgent priority queue 00:09:37.996 SPDK bdev Controller (SPDK1 ) core 0: 3046.67 IO/s 32.82 secs/100000 ios 00:09:37.996 SPDK bdev Controller (SPDK1 ) core 1: 2953.00 IO/s 33.86 secs/100000 ios 00:09:37.996 SPDK bdev Controller (SPDK1 ) core 2: 2794.67 IO/s 35.78 secs/100000 ios 00:09:37.996 SPDK bdev Controller (SPDK1 ) core 3: 3238.33 IO/s 30.88 secs/100000 ios 00:09:37.996 ======================================================== 00:09:37.996 00:09:37.996 16:56:53 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:37.996 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.996 [2024-04-18 16:56:53.621876] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:37.996 [2024-04-18 16:56:53.659415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:37.996 Initializing NVMe Controllers 00:09:37.996 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:37.996 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:37.996 Namespace ID: 1 size: 0GB 00:09:37.996 Initialization complete. 00:09:37.996 INFO: using host memory buffer for IO 00:09:37.996 Hello world! 
00:09:38.254 16:56:53 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:38.254 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.254 [2024-04-18 16:56:53.946050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:39.634 Initializing NVMe Controllers 00:09:39.634 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:39.634 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:39.634 Initialization complete. Launching workers. 00:09:39.634 submit (in ns) avg, min, max = 8315.7, 3493.3, 4046900.0 00:09:39.634 complete (in ns) avg, min, max = 22158.3, 2036.7, 4032520.0 00:09:39.634 00:09:39.634 Submit histogram 00:09:39.634 ================ 00:09:39.634 Range in us Cumulative Count 00:09:39.634 3.484 - 3.508: 0.2202% ( 30) 00:09:39.634 3.508 - 3.532: 1.1525% ( 127) 00:09:39.634 3.532 - 3.556: 3.4209% ( 309) 00:09:39.634 3.556 - 3.579: 7.7081% ( 584) 00:09:39.634 3.579 - 3.603: 15.3208% ( 1037) 00:09:39.634 3.603 - 3.627: 23.9759% ( 1179) 00:09:39.634 3.627 - 3.650: 33.2771% ( 1267) 00:09:39.634 3.650 - 3.674: 41.3596% ( 1101) 00:09:39.634 3.674 - 3.698: 49.7724% ( 1146) 00:09:39.634 3.698 - 3.721: 56.1738% ( 872) 00:09:39.634 3.721 - 3.745: 61.3346% ( 703) 00:09:39.634 3.745 - 3.769: 65.4016% ( 554) 00:09:39.634 3.769 - 3.793: 69.1969% ( 517) 00:09:39.634 3.793 - 3.816: 72.4930% ( 449) 00:09:39.634 3.816 - 3.840: 76.0608% ( 486) 00:09:39.634 3.840 - 3.864: 79.3863% ( 453) 00:09:39.634 3.864 - 3.887: 82.4475% ( 417) 00:09:39.634 3.887 - 3.911: 85.1710% ( 371) 00:09:39.634 3.911 - 3.935: 87.4908% ( 316) 00:09:39.634 3.935 - 3.959: 89.2013% ( 233) 00:09:39.634 3.959 - 3.982: 90.9264% ( 235) 00:09:39.634 3.982 - 4.006: 92.1597% ( 168) 00:09:39.634 4.006 - 4.030: 93.2242% ( 145) 00:09:39.634 4.030 - 4.053: 
94.1418% ( 125) 00:09:39.634 4.053 - 4.077: 94.7878% ( 88) 00:09:39.634 4.077 - 4.101: 95.4779% ( 94) 00:09:39.634 4.101 - 4.124: 96.0358% ( 76) 00:09:39.634 4.124 - 4.148: 96.3588% ( 44) 00:09:39.634 4.148 - 4.172: 96.5644% ( 28) 00:09:39.634 4.172 - 4.196: 96.6892% ( 17) 00:09:39.634 4.196 - 4.219: 96.7552% ( 9) 00:09:39.634 4.219 - 4.243: 96.8360% ( 11) 00:09:39.634 4.243 - 4.267: 96.9168% ( 11) 00:09:39.634 4.267 - 4.290: 97.0636% ( 20) 00:09:39.634 4.290 - 4.314: 97.1590% ( 13) 00:09:39.634 4.314 - 4.338: 97.2398% ( 11) 00:09:39.634 4.338 - 4.361: 97.2838% ( 6) 00:09:39.634 4.361 - 4.385: 97.3058% ( 3) 00:09:39.634 4.385 - 4.409: 97.3352% ( 4) 00:09:39.634 4.433 - 4.456: 97.3572% ( 3) 00:09:39.634 4.456 - 4.480: 97.3646% ( 1) 00:09:39.634 4.480 - 4.504: 97.4086% ( 6) 00:09:39.634 4.504 - 4.527: 97.4380% ( 4) 00:09:39.634 4.527 - 4.551: 97.4747% ( 5) 00:09:39.634 4.551 - 4.575: 97.5407% ( 9) 00:09:39.634 4.575 - 4.599: 97.5995% ( 8) 00:09:39.634 4.599 - 4.622: 97.6435% ( 6) 00:09:39.634 4.622 - 4.646: 97.7096% ( 9) 00:09:39.634 4.646 - 4.670: 97.7316% ( 3) 00:09:39.634 4.670 - 4.693: 97.7977% ( 9) 00:09:39.634 4.693 - 4.717: 97.8417% ( 6) 00:09:39.634 4.717 - 4.741: 97.8564% ( 2) 00:09:39.634 4.741 - 4.764: 97.8637% ( 1) 00:09:39.634 4.764 - 4.788: 97.8858% ( 3) 00:09:39.634 4.788 - 4.812: 97.9005% ( 2) 00:09:39.634 4.812 - 4.836: 97.9372% ( 5) 00:09:39.634 4.836 - 4.859: 97.9812% ( 6) 00:09:39.634 4.859 - 4.883: 97.9959% ( 2) 00:09:39.634 4.883 - 4.907: 98.0326% ( 5) 00:09:39.634 4.907 - 4.930: 98.0546% ( 3) 00:09:39.634 4.930 - 4.954: 98.0840% ( 4) 00:09:39.634 4.954 - 4.978: 98.0987% ( 2) 00:09:39.634 4.978 - 5.001: 98.1133% ( 2) 00:09:39.634 5.001 - 5.025: 98.1207% ( 1) 00:09:39.634 5.049 - 5.073: 98.1427% ( 3) 00:09:39.634 5.073 - 5.096: 98.1501% ( 1) 00:09:39.634 5.096 - 5.120: 98.1574% ( 1) 00:09:39.634 5.144 - 5.167: 98.1647% ( 1) 00:09:39.634 5.215 - 5.239: 98.1794% ( 2) 00:09:39.634 5.286 - 5.310: 98.1868% ( 1) 00:09:39.634 5.310 - 5.333: 98.1941% ( 
1) 00:09:39.634 5.641 - 5.665: 98.2014% ( 1) 00:09:39.634 5.736 - 5.760: 98.2088% ( 1) 00:09:39.634 5.831 - 5.855: 98.2161% ( 1) 00:09:39.634 5.879 - 5.902: 98.2235% ( 1) 00:09:39.634 6.210 - 6.258: 98.2381% ( 2) 00:09:39.634 6.637 - 6.684: 98.2455% ( 1) 00:09:39.634 6.684 - 6.732: 98.2602% ( 2) 00:09:39.634 6.779 - 6.827: 98.2675% ( 1) 00:09:39.634 6.827 - 6.874: 98.2748% ( 1) 00:09:39.634 6.921 - 6.969: 98.2822% ( 1) 00:09:39.634 7.016 - 7.064: 98.3042% ( 3) 00:09:39.634 7.064 - 7.111: 98.3116% ( 1) 00:09:39.634 7.111 - 7.159: 98.3336% ( 3) 00:09:39.634 7.159 - 7.206: 98.3409% ( 1) 00:09:39.634 7.253 - 7.301: 98.3483% ( 1) 00:09:39.634 7.301 - 7.348: 98.3629% ( 2) 00:09:39.634 7.348 - 7.396: 98.3776% ( 2) 00:09:39.634 7.396 - 7.443: 98.3850% ( 1) 00:09:39.634 7.538 - 7.585: 98.4143% ( 4) 00:09:39.634 7.585 - 7.633: 98.4217% ( 1) 00:09:39.634 7.633 - 7.680: 98.4364% ( 2) 00:09:39.634 7.680 - 7.727: 98.4437% ( 1) 00:09:39.634 7.727 - 7.775: 98.4584% ( 2) 00:09:39.634 7.775 - 7.822: 98.4731% ( 2) 00:09:39.634 7.917 - 7.964: 98.4877% ( 2) 00:09:39.634 7.964 - 8.012: 98.4951% ( 1) 00:09:39.634 8.059 - 8.107: 98.5098% ( 2) 00:09:39.634 8.107 - 8.154: 98.5244% ( 2) 00:09:39.634 8.154 - 8.201: 98.5465% ( 3) 00:09:39.634 8.201 - 8.249: 98.5538% ( 1) 00:09:39.634 8.249 - 8.296: 98.5685% ( 2) 00:09:39.634 8.296 - 8.344: 98.5758% ( 1) 00:09:39.634 8.344 - 8.391: 98.5905% ( 2) 00:09:39.634 8.486 - 8.533: 98.5979% ( 1) 00:09:39.634 8.628 - 8.676: 98.6125% ( 2) 00:09:39.634 8.723 - 8.770: 98.6199% ( 1) 00:09:39.634 8.770 - 8.818: 98.6272% ( 1) 00:09:39.634 8.818 - 8.865: 98.6346% ( 1) 00:09:39.634 8.865 - 8.913: 98.6419% ( 1) 00:09:39.634 9.007 - 9.055: 98.6566% ( 2) 00:09:39.634 9.055 - 9.102: 98.6639% ( 1) 00:09:39.634 9.150 - 9.197: 98.6713% ( 1) 00:09:39.634 9.197 - 9.244: 98.6786% ( 1) 00:09:39.634 9.292 - 9.339: 98.6859% ( 1) 00:09:39.634 9.529 - 9.576: 98.7006% ( 2) 00:09:39.634 9.624 - 9.671: 98.7227% ( 3) 00:09:39.634 9.671 - 9.719: 98.7300% ( 1) 00:09:39.634 9.719 - 
9.766: 98.7373% ( 1) 00:09:39.634 9.861 - 9.908: 98.7447% ( 1) 00:09:39.634 10.003 - 10.050: 98.7594% ( 2) 00:09:39.634 10.098 - 10.145: 98.7740% ( 2) 00:09:39.634 10.145 - 10.193: 98.7887% ( 2) 00:09:39.634 10.193 - 10.240: 98.7961% ( 1) 00:09:39.634 10.287 - 10.335: 98.8034% ( 1) 00:09:39.634 10.335 - 10.382: 98.8107% ( 1) 00:09:39.634 10.477 - 10.524: 98.8254% ( 2) 00:09:39.634 10.524 - 10.572: 98.8401% ( 2) 00:09:39.634 10.572 - 10.619: 98.8475% ( 1) 00:09:39.634 10.667 - 10.714: 98.8548% ( 1) 00:09:39.634 10.761 - 10.809: 98.8621% ( 1) 00:09:39.634 10.856 - 10.904: 98.8695% ( 1) 00:09:39.634 10.904 - 10.951: 98.8768% ( 1) 00:09:39.634 10.951 - 10.999: 98.8842% ( 1) 00:09:39.634 10.999 - 11.046: 98.8915% ( 1) 00:09:39.634 11.425 - 11.473: 98.9062% ( 2) 00:09:39.634 11.804 - 11.852: 98.9282% ( 3) 00:09:39.634 11.852 - 11.899: 98.9355% ( 1) 00:09:39.634 11.899 - 11.947: 98.9429% ( 1) 00:09:39.634 11.947 - 11.994: 98.9576% ( 2) 00:09:39.634 11.994 - 12.041: 98.9723% ( 2) 00:09:39.634 12.089 - 12.136: 98.9796% ( 1) 00:09:39.634 12.231 - 12.326: 98.9943% ( 2) 00:09:39.634 12.326 - 12.421: 99.0016% ( 1) 00:09:39.634 12.421 - 12.516: 99.0090% ( 1) 00:09:39.634 12.516 - 12.610: 99.0163% ( 1) 00:09:39.635 12.610 - 12.705: 99.0236% ( 1) 00:09:39.635 12.705 - 12.800: 99.0310% ( 1) 00:09:39.635 13.084 - 13.179: 99.0457% ( 2) 00:09:39.635 13.274 - 13.369: 99.0530% ( 1) 00:09:39.635 13.369 - 13.464: 99.0603% ( 1) 00:09:39.635 13.464 - 13.559: 99.0677% ( 1) 00:09:39.635 13.653 - 13.748: 99.0897% ( 3) 00:09:39.635 13.938 - 14.033: 99.0970% ( 1) 00:09:39.635 14.033 - 14.127: 99.1044% ( 1) 00:09:39.635 14.127 - 14.222: 99.1117% ( 1) 00:09:39.635 14.222 - 14.317: 99.1191% ( 1) 00:09:39.635 14.317 - 14.412: 99.1264% ( 1) 00:09:39.635 15.076 - 15.170: 99.1338% ( 1) 00:09:39.635 15.265 - 15.360: 99.1411% ( 1) 00:09:39.635 15.644 - 15.739: 99.1484% ( 1) 00:09:39.635 17.161 - 17.256: 99.1631% ( 2) 00:09:39.635 17.351 - 17.446: 99.1925% ( 4) 00:09:39.635 17.446 - 17.541: 99.2292% ( 5) 
00:09:39.635 17.541 - 17.636: 99.2439% ( 2) 00:09:39.635 17.636 - 17.730: 99.2953% ( 7) 00:09:39.635 17.730 - 17.825: 99.3246% ( 4) 00:09:39.635 17.825 - 17.920: 99.3834% ( 8) 00:09:39.635 17.920 - 18.015: 99.4201% ( 5) 00:09:39.635 18.015 - 18.110: 99.4568% ( 5) 00:09:39.635 18.110 - 18.204: 99.5228% ( 9) 00:09:39.635 18.204 - 18.299: 99.5889% ( 9) 00:09:39.635 18.299 - 18.394: 99.6183% ( 4) 00:09:39.635 18.394 - 18.489: 99.6623% ( 6) 00:09:39.635 18.489 - 18.584: 99.6843% ( 3) 00:09:39.635 18.584 - 18.679: 99.7064% ( 3) 00:09:39.635 18.679 - 18.773: 99.7724% ( 9) 00:09:39.635 18.773 - 18.868: 99.8091% ( 5) 00:09:39.635 18.868 - 18.963: 99.8238% ( 2) 00:09:39.635 18.963 - 19.058: 99.8312% ( 1) 00:09:39.635 19.058 - 19.153: 99.8458% ( 2) 00:09:39.635 19.342 - 19.437: 99.8532% ( 1) 00:09:39.635 19.532 - 19.627: 99.8605% ( 1) 00:09:39.635 23.514 - 23.609: 99.8679% ( 1) 00:09:39.635 23.893 - 23.988: 99.8752% ( 1) 00:09:39.635 26.169 - 26.359: 99.8825% ( 1) 00:09:39.635 36.978 - 37.167: 99.8899% ( 1) 00:09:39.635 3980.705 - 4004.978: 99.9780% ( 12) 00:09:39.635 4004.978 - 4029.250: 99.9927% ( 2) 00:09:39.635 4029.250 - 4053.523: 100.0000% ( 1) 00:09:39.635 00:09:39.635 Complete histogram 00:09:39.635 ================== 00:09:39.635 Range in us Cumulative Count 00:09:39.635 2.027 - 2.039: 0.0147% ( 2) 00:09:39.635 2.039 - 2.050: 5.0433% ( 685) 00:09:39.635 2.050 - 2.062: 12.9203% ( 1073) 00:09:39.635 2.062 - 2.074: 14.7335% ( 247) 00:09:39.635 2.074 - 2.086: 44.1051% ( 4001) 00:09:39.635 2.086 - 2.098: 58.5377% ( 1966) 00:09:39.635 2.098 - 2.110: 61.1144% ( 351) 00:09:39.635 2.110 - 2.121: 65.7466% ( 631) 00:09:39.635 2.121 - 2.133: 67.1414% ( 190) 00:09:39.635 2.133 - 2.145: 69.7915% ( 361) 00:09:39.635 2.145 - 2.157: 83.0935% ( 1812) 00:09:39.635 2.157 - 2.169: 87.4394% ( 592) 00:09:39.635 2.169 - 2.181: 88.6801% ( 169) 00:09:39.635 2.181 - 2.193: 89.7225% ( 142) 00:09:39.635 2.193 - 2.204: 90.7062% ( 134) 00:09:39.635 2.204 - 2.216: 91.3522% ( 88) 00:09:39.635 2.216 
- 2.228: 93.4665% ( 288) 00:09:39.635 2.228 - 2.240: 94.8392% ( 187) 00:09:39.635 2.240 - 2.252: 95.2650% ( 58) 00:09:39.635 2.252 - 2.264: 95.5073% ( 33) 00:09:39.635 2.264 - 2.276: 95.6321% ( 17) 00:09:39.635 2.276 - 2.287: 95.7202% ( 12) 00:09:39.635 2.287 - 2.299: 95.8817% ( 22) 00:09:39.635 2.299 - 2.311: 96.0872% ( 28) 00:09:39.635 2.311 - 2.323: 96.2854% ( 27) 00:09:39.635 2.323 - 2.335: 96.3809% ( 13) 00:09:39.635 2.335 - 2.347: 96.5497% ( 23) 00:09:39.635 2.347 - 2.359: 96.7846% ( 32) 00:09:39.635 2.359 - 2.370: 97.0269% ( 33) 00:09:39.635 2.370 - 2.382: 97.2911% ( 36) 00:09:39.635 2.382 - 2.394: 97.6142% ( 44) 00:09:39.635 2.394 - 2.406: 97.8564% ( 33) 00:09:39.635 2.406 - 2.418: 98.0399% ( 25) 00:09:39.635 2.418 - 2.430: 98.1427% ( 14) 00:09:39.635 2.430 - 2.441: 98.2088% ( 9) 00:09:39.635 2.441 - 2.453: 98.2675% ( 8) 00:09:39.635 2.453 - 2.465: 98.3262% ( 8) 00:09:39.635 2.465 - 2.477: 98.3483% ( 3) 00:09:39.635 2.477 - 2.489: 98.3703% ( 3) 00:09:39.635 2.501 - 2.513: 98.3996% ( 4) 00:09:39.635 2.513 - 2.524: 98.4143% ( 2) 00:09:39.635 2.524 - 2.536: 98.4217% ( 1) 00:09:39.635 2.548 - 2.560: 98.4290% ( 1) 00:09:39.635 2.560 - 2.572: 98.4437% ( 2) 00:09:39.635 2.596 - 2.607: 98.4584% ( 2) 00:09:39.635 2.655 - 2.667: 98.4657% ( 1) 00:09:39.635 2.667 - 2.679: 98.4731% ( 1) 00:09:39.635 2.679 - 2.690: 98.4804% ( 1) 00:09:39.635 2.714 - 2.726: 98.4877% ( 1) 00:09:39.635 2.797 - 2.809: 98.4951% ( 1) 00:09:39.635 3.129 - 3.153: 98.5024% ( 1) 00:09:39.635 3.153 - 3.176: 98.5098% ( 1) 00:09:39.635 3.176 - 3.200: 98.5171% ( 1) 00:09:39.635 3.200 - 3.224: 98.5318% ( 2) 00:09:39.635 3.224 - 3.247: 98.5465% ( 2) 00:09:39.635 3.247 - 3.271: 98.5612% ( 2) 00:09:39.635 3.271 - 3.295: 98.5758% ( 2) 00:09:39.635 3.319 - 3.342: 98.5832% ( 1) 00:09:39.635 3.342 - 3.366: 98.5979% ( 2) 00:09:39.635 3.366 - 3.390: 98.6052% ( 1) 00:09:39.635 3.390 - 3.413: 98.6199% ( 2) 00:09:39.635 3.413 - 3.437: 98.6492% ( 4) 00:09:39.635
[2024-04-18 16:56:54.972197] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:39.635
3.437 - 3.461: 98.6639% ( 2) 00:09:39.635 3.461 - 3.484: 98.6786% ( 2) 00:09:39.635 3.484 - 3.508: 98.6859% ( 1) 00:09:39.635 3.508 - 3.532: 98.6933% ( 1) 00:09:39.635 3.532 - 3.556: 98.7006% ( 1) 00:09:39.635 3.556 - 3.579: 98.7153% ( 2) 00:09:39.635 3.627 - 3.650: 98.7300% ( 2) 00:09:39.635 3.674 - 3.698: 98.7373% ( 1) 00:09:39.635 3.698 - 3.721: 98.7520% ( 2) 00:09:39.635 3.816 - 3.840: 98.7594% ( 1) 00:09:39.635 3.887 - 3.911: 98.7667% ( 1) 00:09:39.635 3.959 - 3.982: 98.7740% ( 1) 00:09:39.635 4.907 - 4.930: 98.7814% ( 1) 00:09:39.635 4.978 - 5.001: 98.7887% ( 1) 00:09:39.635 5.120 - 5.144: 98.7961% ( 1) 00:09:39.635 5.144 - 5.167: 98.8034% ( 1) 00:09:39.635 5.333 - 5.357: 98.8107% ( 1) 00:09:39.635 5.499 - 5.523: 98.8181% ( 1) 00:09:39.635 5.547 - 5.570: 98.8254% ( 1) 00:09:39.635 5.594 - 5.618: 98.8328% ( 1) 00:09:39.635 5.618 - 5.641: 98.8401% ( 1) 00:09:39.635 5.713 - 5.736: 98.8475% ( 1) 00:09:39.635 5.973 - 5.997: 98.8548% ( 1) 00:09:39.635 6.021 - 6.044: 98.8695% ( 2) 00:09:39.635 6.068 - 6.116: 98.8842% ( 2) 00:09:39.635 6.116 - 6.163: 98.8915% ( 1) 00:09:39.635 6.163 - 6.210: 98.8988% ( 1) 00:09:39.635 6.258 - 6.305: 98.9062% ( 1) 00:09:39.635 6.542 - 6.590: 98.9135% ( 1) 00:09:39.635 6.637 - 6.684: 98.9209% ( 1) 00:09:39.635 6.684 - 6.732: 98.9282% ( 1) 00:09:39.635 6.779 - 6.827: 98.9355% ( 1) 00:09:39.635 6.827 - 6.874: 98.9429% ( 1) 00:09:39.635 6.921 - 6.969: 98.9502% ( 1) 00:09:39.635 7.253 - 7.301: 98.9649% ( 2) 00:09:39.635 7.301 - 7.348: 98.9723% ( 1) 00:09:39.635 7.822 - 7.870: 98.9796% ( 1) 00:09:39.635 7.870 - 7.917: 98.9869% ( 1) 00:09:39.635 8.107 - 8.154: 98.9943% ( 1) 00:09:39.635 10.145 - 10.193: 99.0016% ( 1) 00:09:39.635 10.951 - 10.999: 99.0090% ( 1) 00:09:39.635 11.473 - 11.520: 99.0163% ( 1) 00:09:39.635 15.644 - 15.739: 99.0310% ( 2) 00:09:39.635 15.739 - 15.834: 99.0677% ( 5) 00:09:39.635 15.834 - 15.929: 99.0970% ( 4) 
00:09:39.635 15.929 - 16.024: 99.1044% ( 1) 00:09:39.635 16.024 - 16.119: 99.1484% ( 6) 00:09:39.635 16.119 - 16.213: 99.1778% ( 4) 00:09:39.635 16.213 - 16.308: 99.2145% ( 5) 00:09:39.635 16.308 - 16.403: 99.2365% ( 3) 00:09:39.635 16.403 - 16.498: 99.2953% ( 8) 00:09:39.635 16.498 - 16.593: 99.3760% ( 11) 00:09:39.635 16.593 - 16.687: 99.4054% ( 4) 00:09:39.635 16.687 - 16.782: 99.4127% ( 1) 00:09:39.635 16.782 - 16.877: 99.4274% ( 2) 00:09:39.635 16.877 - 16.972: 99.4421% ( 2) 00:09:39.635 16.972 - 17.067: 99.4568% ( 2) 00:09:39.635 17.067 - 17.161: 99.4641% ( 1) 00:09:39.635 17.161 - 17.256: 99.4788% ( 2) 00:09:39.635 17.256 - 17.351: 99.4861% ( 1) 00:09:39.635 17.730 - 17.825: 99.4935% ( 1) 00:09:39.635 17.825 - 17.920: 99.5008% ( 1) 00:09:39.635 3980.705 - 4004.978: 99.9706% ( 64) 00:09:39.635 4004.978 - 4029.250: 99.9927% ( 3) 00:09:39.636 4029.250 - 4053.523: 100.0000% ( 1) 00:09:39.636 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:39.636 [2024-04-18 16:56:55.243759] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:39.636 [ 00:09:39.636 { 00:09:39.636 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:39.636 "subtype": "Discovery", 00:09:39.636 "listen_addresses": [], 00:09:39.636 "allow_any_host": true, 00:09:39.636 "hosts": [] 00:09:39.636 }, 00:09:39.636 { 00:09:39.636 "nqn": 
"nqn.2019-07.io.spdk:cnode1", 00:09:39.636 "subtype": "NVMe", 00:09:39.636 "listen_addresses": [ 00:09:39.636 { 00:09:39.636 "transport": "VFIOUSER", 00:09:39.636 "trtype": "VFIOUSER", 00:09:39.636 "adrfam": "IPv4", 00:09:39.636 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:39.636 "trsvcid": "0" 00:09:39.636 } 00:09:39.636 ], 00:09:39.636 "allow_any_host": true, 00:09:39.636 "hosts": [], 00:09:39.636 "serial_number": "SPDK1", 00:09:39.636 "model_number": "SPDK bdev Controller", 00:09:39.636 "max_namespaces": 32, 00:09:39.636 "min_cntlid": 1, 00:09:39.636 "max_cntlid": 65519, 00:09:39.636 "namespaces": [ 00:09:39.636 { 00:09:39.636 "nsid": 1, 00:09:39.636 "bdev_name": "Malloc1", 00:09:39.636 "name": "Malloc1", 00:09:39.636 "nguid": "CAD99B0C67724743B8690B0C952D4454", 00:09:39.636 "uuid": "cad99b0c-6772-4743-b869-0b0c952d4454" 00:09:39.636 } 00:09:39.636 ] 00:09:39.636 }, 00:09:39.636 { 00:09:39.636 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:39.636 "subtype": "NVMe", 00:09:39.636 "listen_addresses": [ 00:09:39.636 { 00:09:39.636 "transport": "VFIOUSER", 00:09:39.636 "trtype": "VFIOUSER", 00:09:39.636 "adrfam": "IPv4", 00:09:39.636 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:39.636 "trsvcid": "0" 00:09:39.636 } 00:09:39.636 ], 00:09:39.636 "allow_any_host": true, 00:09:39.636 "hosts": [], 00:09:39.636 "serial_number": "SPDK2", 00:09:39.636 "model_number": "SPDK bdev Controller", 00:09:39.636 "max_namespaces": 32, 00:09:39.636 "min_cntlid": 1, 00:09:39.636 "max_cntlid": 65519, 00:09:39.636 "namespaces": [ 00:09:39.636 { 00:09:39.636 "nsid": 1, 00:09:39.636 "bdev_name": "Malloc2", 00:09:39.636 "name": "Malloc2", 00:09:39.636 "nguid": "55194E22B1E749848CF188FD5205EBAE", 00:09:39.636 "uuid": "55194e22-b1e7-4984-8cf1-88fd5205ebae" 00:09:39.636 } 00:09:39.636 ] 00:09:39.636 } 00:09:39.636 ] 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@34 -- # 
aerpid=1636990 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:39.636 16:56:55 -- common/autotest_common.sh@1251 -- # local i=0 00:09:39.636 16:56:55 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:39.636 16:56:55 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:39.636 16:56:55 -- common/autotest_common.sh@1262 -- # return 0 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:39.636 16:56:55 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:39.636 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.894 [2024-04-18 16:56:55.414208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:39.894 Malloc3 00:09:39.894 16:56:55 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:40.153 [2024-04-18 16:56:55.761906] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:40.153 16:56:55 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:40.153 Asynchronous Event Request test 00:09:40.153 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:40.153 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:40.153 Registering asynchronous event callbacks... 00:09:40.153 Starting namespace attribute notice tests for all controllers... 
00:09:40.153 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:40.153 aer_cb - Changed Namespace 00:09:40.153 Cleaning up... 00:09:40.413 [ 00:09:40.413 { 00:09:40.413 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:40.413 "subtype": "Discovery", 00:09:40.413 "listen_addresses": [], 00:09:40.413 "allow_any_host": true, 00:09:40.413 "hosts": [] 00:09:40.413 }, 00:09:40.413 { 00:09:40.413 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:40.413 "subtype": "NVMe", 00:09:40.413 "listen_addresses": [ 00:09:40.413 { 00:09:40.413 "transport": "VFIOUSER", 00:09:40.413 "trtype": "VFIOUSER", 00:09:40.413 "adrfam": "IPv4", 00:09:40.413 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:40.413 "trsvcid": "0" 00:09:40.413 } 00:09:40.413 ], 00:09:40.413 "allow_any_host": true, 00:09:40.413 "hosts": [], 00:09:40.413 "serial_number": "SPDK1", 00:09:40.413 "model_number": "SPDK bdev Controller", 00:09:40.413 "max_namespaces": 32, 00:09:40.413 "min_cntlid": 1, 00:09:40.413 "max_cntlid": 65519, 00:09:40.413 "namespaces": [ 00:09:40.413 { 00:09:40.413 "nsid": 1, 00:09:40.413 "bdev_name": "Malloc1", 00:09:40.413 "name": "Malloc1", 00:09:40.413 "nguid": "CAD99B0C67724743B8690B0C952D4454", 00:09:40.413 "uuid": "cad99b0c-6772-4743-b869-0b0c952d4454" 00:09:40.413 }, 00:09:40.413 { 00:09:40.413 "nsid": 2, 00:09:40.413 "bdev_name": "Malloc3", 00:09:40.413 "name": "Malloc3", 00:09:40.413 "nguid": "03004E8B4EBA4E6FADA2DF06E1FCA481", 00:09:40.413 "uuid": "03004e8b-4eba-4e6f-ada2-df06e1fca481" 00:09:40.413 } 00:09:40.413 ] 00:09:40.413 }, 00:09:40.413 { 00:09:40.413 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:40.413 "subtype": "NVMe", 00:09:40.413 "listen_addresses": [ 00:09:40.413 { 00:09:40.413 "transport": "VFIOUSER", 00:09:40.413 "trtype": "VFIOUSER", 00:09:40.413 "adrfam": "IPv4", 00:09:40.413 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:40.413 "trsvcid": "0" 00:09:40.413 } 00:09:40.413 ], 00:09:40.413 
"allow_any_host": true, 00:09:40.413 "hosts": [], 00:09:40.413 "serial_number": "SPDK2", 00:09:40.413 "model_number": "SPDK bdev Controller", 00:09:40.413 "max_namespaces": 32, 00:09:40.413 "min_cntlid": 1, 00:09:40.413 "max_cntlid": 65519, 00:09:40.413 "namespaces": [ 00:09:40.413 { 00:09:40.413 "nsid": 1, 00:09:40.413 "bdev_name": "Malloc2", 00:09:40.413 "name": "Malloc2", 00:09:40.413 "nguid": "55194E22B1E749848CF188FD5205EBAE", 00:09:40.413 "uuid": "55194e22-b1e7-4984-8cf1-88fd5205ebae" 00:09:40.414 } 00:09:40.414 ] 00:09:40.414 } 00:09:40.414 ] 00:09:40.414 16:56:56 -- target/nvmf_vfio_user.sh@44 -- # wait 1636990 00:09:40.414 16:56:56 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:40.414 16:56:56 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:40.414 16:56:56 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:40.414 16:56:56 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:40.414 [2024-04-18 16:56:56.054240] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:09:40.414 [2024-04-18 16:56:56.054274] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637115 ] 00:09:40.414 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.414 [2024-04-18 16:56:56.087454] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:40.414 [2024-04-18 16:56:56.096899] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:40.414 [2024-04-18 16:56:56.096928] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3964a8d000 00:09:40.414 [2024-04-18 16:56:56.097899] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.414 [2024-04-18 16:56:56.098900] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.414 [2024-04-18 16:56:56.099913] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.414 [2024-04-18 16:56:56.100913] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:40.414 [2024-04-18 16:56:56.101924] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:40.414 [2024-04-18 16:56:56.102927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.414 [2024-04-18 16:56:56.103939] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:09:40.414 [2024-04-18 16:56:56.104940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:40.414 [2024-04-18 16:56:56.105951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:40.414 [2024-04-18 16:56:56.105976] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3964a82000 00:09:40.414 [2024-04-18 16:56:56.107090] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:40.674 [2024-04-18 16:56:56.121134] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:40.674 [2024-04-18 16:56:56.121171] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:40.674 [2024-04-18 16:56:56.126285] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:40.674 [2024-04-18 16:56:56.126335] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:40.674 [2024-04-18 16:56:56.126441] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:09:40.674 [2024-04-18 16:56:56.126469] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:40.674 [2024-04-18 16:56:56.126479] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:40.674 [2024-04-18 16:56:56.127294] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:40.674 [2024-04-18 16:56:56.127314] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:40.674 [2024-04-18 16:56:56.127327] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:40.674 [2024-04-18 16:56:56.128296] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:40.674 [2024-04-18 16:56:56.128316] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:40.674 [2024-04-18 16:56:56.128329] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:40.674 [2024-04-18 16:56:56.129303] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:40.674 [2024-04-18 16:56:56.129323] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:40.674 [2024-04-18 16:56:56.130313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:40.674 [2024-04-18 16:56:56.130332] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:40.674 [2024-04-18 16:56:56.130342] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:40.674 [2024-04-18 16:56:56.130353] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:40.675 [2024-04-18 16:56:56.130463] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:40.675 [2024-04-18 16:56:56.130474] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:40.675 [2024-04-18 16:56:56.130483] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:40.675 [2024-04-18 16:56:56.132394] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:40.675 [2024-04-18 16:56:56.133337] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:40.675 [2024-04-18 16:56:56.134349] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:40.675 [2024-04-18 16:56:56.135344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:40.675 [2024-04-18 16:56:56.135445] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:40.675 [2024-04-18 16:56:56.136385] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:40.675 [2024-04-18 16:56:56.136409] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:40.675 [2024-04-18 16:56:56.136419] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.136444] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:40.675 [2024-04-18 16:56:56.136458] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.136482] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.675 [2024-04-18 16:56:56.136492] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.675 [2024-04-18 16:56:56.136510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.142396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.142418] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:40.675 [2024-04-18 16:56:56.142427] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:40.675 [2024-04-18 16:56:56.142435] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:40.675 [2024-04-18 16:56:56.142442] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:40.675 [2024-04-18 16:56:56.142450] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:40.675 [2024-04-18 16:56:56.142458] 
nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:40.675 [2024-04-18 16:56:56.142466] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.142478] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.142494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.150392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.150420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.675 [2024-04-18 16:56:56.150435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.675 [2024-04-18 16:56:56.150447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.675 [2024-04-18 16:56:56.150459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:40.675 [2024-04-18 16:56:56.150468] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.150486] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.150502] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.156393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.156422] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:40.675 [2024-04-18 16:56:56.156431] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.156447] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.156458] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.156471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.166411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.166470] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.166486] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.166500] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:40.675 [2024-04-18 16:56:56.166509] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:40.675 [2024-04-18 16:56:56.166519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.174393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.174421] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:40.675 [2024-04-18 16:56:56.174438] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.174452] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.174465] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.675 [2024-04-18 16:56:56.174473] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.675 [2024-04-18 16:56:56.174483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.182397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.182424] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.182440] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:09:40.675 [2024-04-18 16:56:56.182453] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:40.675 [2024-04-18 16:56:56.182462] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.675 [2024-04-18 16:56:56.182476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.190393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.190414] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.190426] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.190440] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.190450] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.190459] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:40.675 [2024-04-18 16:56:56.190468] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:40.675 [2024-04-18 16:56:56.190476] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:40.675 [2024-04-18 
16:56:56.190484] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:40.675 [2024-04-18 16:56:56.190508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.198396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.198423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.206395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.206420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.214391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.214417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:40.675 [2024-04-18 16:56:56.222393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:40.675 [2024-04-18 16:56:56.222427] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:40.675 [2024-04-18 16:56:56.222438] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:40.675 [2024-04-18 16:56:56.222444] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:40.675 [2024-04-18 16:56:56.222450] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:09:40.675 [2024-04-18 16:56:56.222460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:40.676 [2024-04-18 16:56:56.222472] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:40.676 [2024-04-18 16:56:56.222482] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:40.676 [2024-04-18 16:56:56.222492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:40.676 [2024-04-18 16:56:56.222508] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:40.676 [2024-04-18 16:56:56.222518] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:40.676 [2024-04-18 16:56:56.222527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:40.676 [2024-04-18 16:56:56.222539] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:40.676 [2024-04-18 16:56:56.222547] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:40.676 [2024-04-18 16:56:56.222556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:40.676 [2024-04-18 16:56:56.230408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:40.676 [2024-04-18 16:56:56.230448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:40.676 [2024-04-18 
16:56:56.230465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:40.676 [2024-04-18 16:56:56.230477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:40.676 ===================================================== 00:09:40.676 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:40.676 ===================================================== 00:09:40.676 Controller Capabilities/Features 00:09:40.676 ================================ 00:09:40.676 Vendor ID: 4e58 00:09:40.676 Subsystem Vendor ID: 4e58 00:09:40.676 Serial Number: SPDK2 00:09:40.676 Model Number: SPDK bdev Controller 00:09:40.676 Firmware Version: 24.05 00:09:40.676 Recommended Arb Burst: 6 00:09:40.676 IEEE OUI Identifier: 8d 6b 50 00:09:40.676 Multi-path I/O 00:09:40.676 May have multiple subsystem ports: Yes 00:09:40.676 May have multiple controllers: Yes 00:09:40.676 Associated with SR-IOV VF: No 00:09:40.676 Max Data Transfer Size: 131072 00:09:40.676 Max Number of Namespaces: 32 00:09:40.676 Max Number of I/O Queues: 127 00:09:40.676 NVMe Specification Version (VS): 1.3 00:09:40.676 NVMe Specification Version (Identify): 1.3 00:09:40.676 Maximum Queue Entries: 256 00:09:40.676 Contiguous Queues Required: Yes 00:09:40.676 Arbitration Mechanisms Supported 00:09:40.676 Weighted Round Robin: Not Supported 00:09:40.676 Vendor Specific: Not Supported 00:09:40.676 Reset Timeout: 15000 ms 00:09:40.676 Doorbell Stride: 4 bytes 00:09:40.676 NVM Subsystem Reset: Not Supported 00:09:40.676 Command Sets Supported 00:09:40.676 NVM Command Set: Supported 00:09:40.676 Boot Partition: Not Supported 00:09:40.676 Memory Page Size Minimum: 4096 bytes 00:09:40.676 Memory Page Size Maximum: 4096 bytes 00:09:40.676 Persistent Memory Region: Not Supported 00:09:40.676 Optional Asynchronous Events Supported 00:09:40.676 Namespace 
Attribute Notices: Supported 00:09:40.676 Firmware Activation Notices: Not Supported 00:09:40.676 ANA Change Notices: Not Supported 00:09:40.676 PLE Aggregate Log Change Notices: Not Supported 00:09:40.676 LBA Status Info Alert Notices: Not Supported 00:09:40.676 EGE Aggregate Log Change Notices: Not Supported 00:09:40.676 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.676 Zone Descriptor Change Notices: Not Supported 00:09:40.676 Discovery Log Change Notices: Not Supported 00:09:40.676 Controller Attributes 00:09:40.676 128-bit Host Identifier: Supported 00:09:40.676 Non-Operational Permissive Mode: Not Supported 00:09:40.676 NVM Sets: Not Supported 00:09:40.676 Read Recovery Levels: Not Supported 00:09:40.676 Endurance Groups: Not Supported 00:09:40.676 Predictable Latency Mode: Not Supported 00:09:40.676 Traffic Based Keep ALive: Not Supported 00:09:40.676 Namespace Granularity: Not Supported 00:09:40.676 SQ Associations: Not Supported 00:09:40.676 UUID List: Not Supported 00:09:40.676 Multi-Domain Subsystem: Not Supported 00:09:40.676 Fixed Capacity Management: Not Supported 00:09:40.676 Variable Capacity Management: Not Supported 00:09:40.676 Delete Endurance Group: Not Supported 00:09:40.676 Delete NVM Set: Not Supported 00:09:40.676 Extended LBA Formats Supported: Not Supported 00:09:40.676 Flexible Data Placement Supported: Not Supported 00:09:40.676 00:09:40.676 Controller Memory Buffer Support 00:09:40.676 ================================ 00:09:40.676 Supported: No 00:09:40.676 00:09:40.676 Persistent Memory Region Support 00:09:40.676 ================================ 00:09:40.676 Supported: No 00:09:40.676 00:09:40.676 Admin Command Set Attributes 00:09:40.676 ============================ 00:09:40.676 Security Send/Receive: Not Supported 00:09:40.676 Format NVM: Not Supported 00:09:40.676 Firmware Activate/Download: Not Supported 00:09:40.676 Namespace Management: Not Supported 00:09:40.676 Device Self-Test: Not Supported 00:09:40.676 
Directives: Not Supported 00:09:40.676 NVMe-MI: Not Supported 00:09:40.676 Virtualization Management: Not Supported 00:09:40.676 Doorbell Buffer Config: Not Supported 00:09:40.676 Get LBA Status Capability: Not Supported 00:09:40.676 Command & Feature Lockdown Capability: Not Supported 00:09:40.676 Abort Command Limit: 4 00:09:40.676 Async Event Request Limit: 4 00:09:40.676 Number of Firmware Slots: N/A 00:09:40.676 Firmware Slot 1 Read-Only: N/A 00:09:40.676 Firmware Activation Without Reset: N/A 00:09:40.676 Multiple Update Detection Support: N/A 00:09:40.676 Firmware Update Granularity: No Information Provided 00:09:40.676 Per-Namespace SMART Log: No 00:09:40.676 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.676 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:40.676 Command Effects Log Page: Supported 00:09:40.676 Get Log Page Extended Data: Supported 00:09:40.676 Telemetry Log Pages: Not Supported 00:09:40.676 Persistent Event Log Pages: Not Supported 00:09:40.676 Supported Log Pages Log Page: May Support 00:09:40.676 Commands Supported & Effects Log Page: Not Supported 00:09:40.676 Feature Identifiers & Effects Log Page:May Support 00:09:40.676 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.676 Data Area 4 for Telemetry Log: Not Supported 00:09:40.676 Error Log Page Entries Supported: 128 00:09:40.676 Keep Alive: Supported 00:09:40.676 Keep Alive Granularity: 10000 ms 00:09:40.676 00:09:40.676 NVM Command Set Attributes 00:09:40.676 ========================== 00:09:40.676 Submission Queue Entry Size 00:09:40.676 Max: 64 00:09:40.676 Min: 64 00:09:40.676 Completion Queue Entry Size 00:09:40.676 Max: 16 00:09:40.676 Min: 16 00:09:40.676 Number of Namespaces: 32 00:09:40.676 Compare Command: Supported 00:09:40.676 Write Uncorrectable Command: Not Supported 00:09:40.676 Dataset Management Command: Supported 00:09:40.676 Write Zeroes Command: Supported 00:09:40.676 Set Features Save Field: Not Supported 00:09:40.676 Reservations: Not 
Supported 00:09:40.676 Timestamp: Not Supported 00:09:40.676 Copy: Supported 00:09:40.676 Volatile Write Cache: Present 00:09:40.676 Atomic Write Unit (Normal): 1 00:09:40.676 Atomic Write Unit (PFail): 1 00:09:40.676 Atomic Compare & Write Unit: 1 00:09:40.676 Fused Compare & Write: Supported 00:09:40.676 Scatter-Gather List 00:09:40.676 SGL Command Set: Supported (Dword aligned) 00:09:40.676 SGL Keyed: Not Supported 00:09:40.676 SGL Bit Bucket Descriptor: Not Supported 00:09:40.676 SGL Metadata Pointer: Not Supported 00:09:40.676 Oversized SGL: Not Supported 00:09:40.676 SGL Metadata Address: Not Supported 00:09:40.676 SGL Offset: Not Supported 00:09:40.676 Transport SGL Data Block: Not Supported 00:09:40.676 Replay Protected Memory Block: Not Supported 00:09:40.676 00:09:40.676 Firmware Slot Information 00:09:40.676 ========================= 00:09:40.676 Active slot: 1 00:09:40.676 Slot 1 Firmware Revision: 24.05 00:09:40.676 00:09:40.676 00:09:40.676 Commands Supported and Effects 00:09:40.676 ============================== 00:09:40.676 Admin Commands 00:09:40.676 -------------- 00:09:40.676 Get Log Page (02h): Supported 00:09:40.676 Identify (06h): Supported 00:09:40.676 Abort (08h): Supported 00:09:40.676 Set Features (09h): Supported 00:09:40.676 Get Features (0Ah): Supported 00:09:40.676 Asynchronous Event Request (0Ch): Supported 00:09:40.676 Keep Alive (18h): Supported 00:09:40.676 I/O Commands 00:09:40.676 ------------ 00:09:40.676 Flush (00h): Supported LBA-Change 00:09:40.676 Write (01h): Supported LBA-Change 00:09:40.676 Read (02h): Supported 00:09:40.676 Compare (05h): Supported 00:09:40.676 Write Zeroes (08h): Supported LBA-Change 00:09:40.676 Dataset Management (09h): Supported LBA-Change 00:09:40.676 Copy (19h): Supported LBA-Change 00:09:40.676 Unknown (79h): Supported LBA-Change 00:09:40.676 Unknown (7Ah): Supported 00:09:40.676 00:09:40.676 Error Log 00:09:40.676 ========= 00:09:40.677 00:09:40.677 Arbitration 00:09:40.677 =========== 
00:09:40.677 Arbitration Burst: 1 00:09:40.677 00:09:40.677 Power Management 00:09:40.677 ================ 00:09:40.677 Number of Power States: 1 00:09:40.677 Current Power State: Power State #0 00:09:40.677 Power State #0: 00:09:40.677 Max Power: 0.00 W 00:09:40.677 Non-Operational State: Operational 00:09:40.677 Entry Latency: Not Reported 00:09:40.677 Exit Latency: Not Reported 00:09:40.677 Relative Read Throughput: 0 00:09:40.677 Relative Read Latency: 0 00:09:40.677 Relative Write Throughput: 0 00:09:40.677 Relative Write Latency: 0 00:09:40.677 Idle Power: Not Reported 00:09:40.677 Active Power: Not Reported 00:09:40.677 Non-Operational Permissive Mode: Not Supported 00:09:40.677 00:09:40.677 Health Information 00:09:40.677 ================== 00:09:40.677 Critical Warnings: 00:09:40.677 Available Spare Space: OK 00:09:40.677 Temperature: OK 00:09:40.677 Device Reliability: OK 00:09:40.677 Read Only: No 00:09:40.677 Volatile Memory Backup: OK 00:09:40.677 Current Temperature: 0 Kelvin (-2[2024-04-18 16:56:56.230595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:40.677 [2024-04-18 16:56:56.238394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:40.677 [2024-04-18 16:56:56.238436] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:40.677 [2024-04-18 16:56:56.238453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.677 [2024-04-18 16:56:56.238464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.677 [2024-04-18 16:56:56.238475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:09:40.677 [2024-04-18 16:56:56.238485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.677 [2024-04-18 16:56:56.238551] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:40.677 [2024-04-18 16:56:56.238572] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:40.677 [2024-04-18 16:56:56.239557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:40.677 [2024-04-18 16:56:56.239637] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:40.677 [2024-04-18 16:56:56.239657] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:40.677 [2024-04-18 16:56:56.240569] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:40.677 [2024-04-18 16:56:56.240594] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:40.677 [2024-04-18 16:56:56.240646] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:40.677 [2024-04-18 16:56:56.243393] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:40.677 73 Celsius) 00:09:40.677 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:40.677 Available Spare: 0% 00:09:40.677 Available Spare Threshold: 0% 00:09:40.677 Life Percentage Used: 0% 00:09:40.677 Data Units Read: 0 00:09:40.677 Data Units Written: 0 00:09:40.677 Host Read Commands: 0 00:09:40.677 Host Write 
Commands: 0 00:09:40.677 Controller Busy Time: 0 minutes 00:09:40.677 Power Cycles: 0 00:09:40.677 Power On Hours: 0 hours 00:09:40.677 Unsafe Shutdowns: 0 00:09:40.677 Unrecoverable Media Errors: 0 00:09:40.677 Lifetime Error Log Entries: 0 00:09:40.677 Warning Temperature Time: 0 minutes 00:09:40.677 Critical Temperature Time: 0 minutes 00:09:40.677 00:09:40.677 Number of Queues 00:09:40.677 ================ 00:09:40.677 Number of I/O Submission Queues: 127 00:09:40.677 Number of I/O Completion Queues: 127 00:09:40.677 00:09:40.677 Active Namespaces 00:09:40.677 ================= 00:09:40.677 Namespace ID:1 00:09:40.677 Error Recovery Timeout: Unlimited 00:09:40.677 Command Set Identifier: NVM (00h) 00:09:40.677 Deallocate: Supported 00:09:40.677 Deallocated/Unwritten Error: Not Supported 00:09:40.677 Deallocated Read Value: Unknown 00:09:40.677 Deallocate in Write Zeroes: Not Supported 00:09:40.677 Deallocated Guard Field: 0xFFFF 00:09:40.677 Flush: Supported 00:09:40.677 Reservation: Supported 00:09:40.677 Namespace Sharing Capabilities: Multiple Controllers 00:09:40.677 Size (in LBAs): 131072 (0GiB) 00:09:40.677 Capacity (in LBAs): 131072 (0GiB) 00:09:40.677 Utilization (in LBAs): 131072 (0GiB) 00:09:40.677 NGUID: 55194E22B1E749848CF188FD5205EBAE 00:09:40.677 UUID: 55194e22-b1e7-4984-8cf1-88fd5205ebae 00:09:40.677 Thin Provisioning: Not Supported 00:09:40.677 Per-NS Atomic Units: Yes 00:09:40.677 Atomic Boundary Size (Normal): 0 00:09:40.677 Atomic Boundary Size (PFail): 0 00:09:40.677 Atomic Boundary Offset: 0 00:09:40.677 Maximum Single Source Range Length: 65535 00:09:40.677 Maximum Copy Length: 65535 00:09:40.677 Maximum Source Range Count: 1 00:09:40.677 NGUID/EUI64 Never Reused: No 00:09:40.677 Namespace Write Protected: No 00:09:40.677 Number of LBA Formats: 1 00:09:40.677 Current LBA Format: LBA Format #00 00:09:40.677 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.677 00:09:40.677 16:56:56 -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:40.677 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.937 [2024-04-18 16:56:56.470134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:46.213 [2024-04-18 16:57:01.577725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:46.213 Initializing NVMe Controllers 00:09:46.213 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:46.213 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:46.213 Initialization complete. Launching workers. 00:09:46.213 ======================================================== 00:09:46.213 Latency(us) 00:09:46.213 Device Information : IOPS MiB/s Average min max 00:09:46.213 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34408.41 134.41 3719.47 1191.37 7409.57 00:09:46.213 ======================================================== 00:09:46.213 Total : 34408.41 134.41 3719.47 1191.37 7409.57 00:09:46.213 00:09:46.213 16:57:01 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:46.213 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.213 [2024-04-18 16:57:01.809417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:51.517 [2024-04-18 16:57:06.834656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:51.517 Initializing NVMe Controllers 
00:09:51.517 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:51.517 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:51.517 Initialization complete. Launching workers. 00:09:51.517 ======================================================== 00:09:51.517 Latency(us) 00:09:51.517 Device Information : IOPS MiB/s Average min max 00:09:51.517 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31600.36 123.44 4049.70 1229.92 9873.41 00:09:51.517 ======================================================== 00:09:51.517 Total : 31600.36 123.44 4049.70 1229.92 9873.41 00:09:51.517 00:09:51.517 16:57:06 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:51.517 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.517 [2024-04-18 16:57:07.046724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:56.790 [2024-04-18 16:57:12.188532] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:56.790 Initializing NVMe Controllers 00:09:56.790 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:56.790 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:56.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:09:56.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:09:56.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:09:56.790 Initialization complete. Launching workers. 
00:09:56.790 Starting thread on core 2 00:09:56.790 Starting thread on core 3 00:09:56.790 Starting thread on core 1 00:09:56.790 16:57:12 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:09:56.790 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.790 [2024-04-18 16:57:12.492920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:00.082 [2024-04-18 16:57:15.556089] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:00.082 Initializing NVMe Controllers 00:10:00.082 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:00.082 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:00.082 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:00.082 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:00.082 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:00.082 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:00.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:00.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:00.082 Initialization complete. Launching workers. 
00:10:00.082 Starting thread on core 1 with urgent priority queue 00:10:00.082 Starting thread on core 2 with urgent priority queue 00:10:00.082 Starting thread on core 3 with urgent priority queue 00:10:00.082 Starting thread on core 0 with urgent priority queue 00:10:00.082 SPDK bdev Controller (SPDK2 ) core 0: 5229.67 IO/s 19.12 secs/100000 ios 00:10:00.082 SPDK bdev Controller (SPDK2 ) core 1: 5417.33 IO/s 18.46 secs/100000 ios 00:10:00.082 SPDK bdev Controller (SPDK2 ) core 2: 5327.00 IO/s 18.77 secs/100000 ios 00:10:00.082 SPDK bdev Controller (SPDK2 ) core 3: 4330.67 IO/s 23.09 secs/100000 ios 00:10:00.082 ======================================================== 00:10:00.082 00:10:00.082 16:57:15 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:00.082 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.339 [2024-04-18 16:57:15.854841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:00.339 [2024-04-18 16:57:15.864911] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:00.339 Initializing NVMe Controllers 00:10:00.339 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:00.339 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:00.339 Namespace ID: 1 size: 0GB 00:10:00.339 Initialization complete. 00:10:00.339 INFO: using host memory buffer for IO 00:10:00.339 Hello world! 
00:10:00.339 16:57:15 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:00.339 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.598 [2024-04-18 16:57:16.151977] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:01.535 Initializing NVMe Controllers 00:10:01.535 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:01.535 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:01.535 Initialization complete. Launching workers. 00:10:01.535 submit (in ns) avg, min, max = 7400.5, 3474.4, 4014907.8 00:10:01.535 complete (in ns) avg, min, max = 23960.2, 2043.3, 4015247.8 00:10:01.535 00:10:01.535 Submit histogram 00:10:01.535 ================ 00:10:01.535 Range in us Cumulative Count 00:10:01.535 3.461 - 3.484: 0.0220% ( 3) 00:10:01.535 3.484 - 3.508: 0.4332% ( 56) 00:10:01.535 3.508 - 3.532: 1.7696% ( 182) 00:10:01.535 3.532 - 3.556: 5.0738% ( 450) 00:10:01.535 3.556 - 3.579: 11.2123% ( 836) 00:10:01.535 3.579 - 3.603: 19.6490% ( 1149) 00:10:01.535 3.603 - 3.627: 30.3179% ( 1453) 00:10:01.535 3.627 - 3.650: 40.8767% ( 1438) 00:10:01.535 3.650 - 3.674: 49.0565% ( 1114) 00:10:01.536 3.674 - 3.698: 55.5180% ( 880) 00:10:01.536 3.698 - 3.721: 60.8635% ( 728) 00:10:01.536 3.721 - 3.745: 65.7244% ( 662) 00:10:01.536 3.745 - 3.769: 69.1314% ( 464) 00:10:01.536 3.769 - 3.793: 72.8174% ( 502) 00:10:01.536 3.793 - 3.816: 75.4020% ( 352) 00:10:01.536 3.816 - 3.840: 78.1629% ( 376) 00:10:01.536 3.840 - 3.864: 81.8269% ( 499) 00:10:01.536 3.864 - 3.887: 85.0576% ( 440) 00:10:01.536 3.887 - 3.911: 87.3412% ( 311) 00:10:01.536 3.911 - 3.935: 89.2650% ( 262) 00:10:01.536 3.935 - 3.959: 90.7556% ( 203) 00:10:01.536 3.959 - 3.982: 92.2241% ( 200) 00:10:01.536 3.982 - 4.006: 93.4650% ( 169) 00:10:01.536 4.006 - 4.030: 
94.2947% ( 113) 00:10:01.536 4.030 - 4.053: 94.8895% ( 81) 00:10:01.536 4.053 - 4.077: 95.5356% ( 88) 00:10:01.536 4.077 - 4.101: 95.9762% ( 60) 00:10:01.536 4.101 - 4.124: 96.3140% ( 46) 00:10:01.536 4.124 - 4.148: 96.6224% ( 42) 00:10:01.536 4.148 - 4.172: 96.8500% ( 31) 00:10:01.536 4.172 - 4.196: 96.9822% ( 18) 00:10:01.536 4.196 - 4.219: 97.1290% ( 20) 00:10:01.536 4.219 - 4.243: 97.1951% ( 9) 00:10:01.536 4.243 - 4.267: 97.2832% ( 12) 00:10:01.536 4.267 - 4.290: 97.3493% ( 9) 00:10:01.536 4.290 - 4.314: 97.4154% ( 9) 00:10:01.536 4.314 - 4.338: 97.4521% ( 5) 00:10:01.536 4.338 - 4.361: 97.5402% ( 12) 00:10:01.536 4.361 - 4.385: 97.6063% ( 9) 00:10:01.536 4.385 - 4.409: 97.6136% ( 1) 00:10:01.536 4.409 - 4.433: 97.6357% ( 3) 00:10:01.536 4.433 - 4.456: 97.6430% ( 1) 00:10:01.536 4.480 - 4.504: 97.6503% ( 1) 00:10:01.536 4.527 - 4.551: 97.6724% ( 3) 00:10:01.536 4.551 - 4.575: 97.6797% ( 1) 00:10:01.536 4.575 - 4.599: 97.7017% ( 3) 00:10:01.536 4.599 - 4.622: 97.7238% ( 3) 00:10:01.536 4.622 - 4.646: 97.7752% ( 7) 00:10:01.536 4.646 - 4.670: 97.8266% ( 7) 00:10:01.536 4.670 - 4.693: 97.8706% ( 6) 00:10:01.536 4.693 - 4.717: 97.9147% ( 6) 00:10:01.536 4.717 - 4.741: 97.9367% ( 3) 00:10:01.536 4.741 - 4.764: 98.0542% ( 16) 00:10:01.536 4.764 - 4.788: 98.1203% ( 9) 00:10:01.536 4.788 - 4.812: 98.1643% ( 6) 00:10:01.536 4.812 - 4.836: 98.2010% ( 5) 00:10:01.536 4.836 - 4.859: 98.2157% ( 2) 00:10:01.536 4.859 - 4.883: 98.2892% ( 10) 00:10:01.536 4.883 - 4.907: 98.3038% ( 2) 00:10:01.536 4.907 - 4.930: 98.3479% ( 6) 00:10:01.536 4.930 - 4.954: 98.3993% ( 7) 00:10:01.536 4.954 - 4.978: 98.4213% ( 3) 00:10:01.536 4.978 - 5.001: 98.4434% ( 3) 00:10:01.536 5.025 - 5.049: 98.4580% ( 2) 00:10:01.536 5.049 - 5.073: 98.4727% ( 2) 00:10:01.536 5.073 - 5.096: 98.4801% ( 1) 00:10:01.536 5.096 - 5.120: 98.4874% ( 1) 00:10:01.536 5.120 - 5.144: 98.4947% ( 1) 00:10:01.536 5.167 - 5.191: 98.5021% ( 1) 00:10:01.536 5.191 - 5.215: 98.5094% ( 1) 00:10:01.536 5.239 - 5.262: 98.5241% ( 
2) 00:10:01.536 5.333 - 5.357: 98.5388% ( 2) 00:10:01.536 5.428 - 5.452: 98.5461% ( 1) 00:10:01.536 5.499 - 5.523: 98.5535% ( 1) 00:10:01.536 5.665 - 5.689: 98.5608% ( 1) 00:10:01.536 5.736 - 5.760: 98.5682% ( 1) 00:10:01.536 5.784 - 5.807: 98.5755% ( 1) 00:10:01.536 5.879 - 5.902: 98.5829% ( 1) 00:10:01.536 6.116 - 6.163: 98.5902% ( 1) 00:10:01.536 6.163 - 6.210: 98.5975% ( 1) 00:10:01.536 6.258 - 6.305: 98.6122% ( 2) 00:10:01.536 6.305 - 6.353: 98.6196% ( 1) 00:10:01.536 6.637 - 6.684: 98.6269% ( 1) 00:10:01.536 6.684 - 6.732: 98.6343% ( 1) 00:10:01.536 6.732 - 6.779: 98.6416% ( 1) 00:10:01.536 6.779 - 6.827: 98.6489% ( 1) 00:10:01.536 6.921 - 6.969: 98.6563% ( 1) 00:10:01.536 6.969 - 7.016: 98.6636% ( 1) 00:10:01.536 7.016 - 7.064: 98.6710% ( 1) 00:10:01.536 7.064 - 7.111: 98.6783% ( 1) 00:10:01.536 7.111 - 7.159: 98.6857% ( 1) 00:10:01.536 7.159 - 7.206: 98.7003% ( 2) 00:10:01.536 7.301 - 7.348: 98.7077% ( 1) 00:10:01.536 7.396 - 7.443: 98.7150% ( 1) 00:10:01.536 7.443 - 7.490: 98.7224% ( 1) 00:10:01.536 7.490 - 7.538: 98.7297% ( 1) 00:10:01.536 7.538 - 7.585: 98.7371% ( 1) 00:10:01.536 7.633 - 7.680: 98.7517% ( 2) 00:10:01.536 7.727 - 7.775: 98.7591% ( 1) 00:10:01.536 7.822 - 7.870: 98.7738% ( 2) 00:10:01.536 7.870 - 7.917: 98.7811% ( 1) 00:10:01.536 7.917 - 7.964: 98.7958% ( 2) 00:10:01.536 7.964 - 8.012: 98.8031% ( 1) 00:10:01.536 8.059 - 8.107: 98.8105% ( 1) 00:10:01.536 8.107 - 8.154: 98.8252% ( 2) 00:10:01.536 8.296 - 8.344: 98.8399% ( 2) 00:10:01.536 8.391 - 8.439: 98.8692% ( 4) 00:10:01.536 8.533 - 8.581: 98.8839% ( 2) 00:10:01.536 8.628 - 8.676: 98.8986% ( 2) 00:10:01.536 9.007 - 9.055: 98.9059% ( 1) 00:10:01.536 9.055 - 9.102: 98.9133% ( 1) 00:10:01.536 9.150 - 9.197: 98.9206% ( 1) 00:10:01.536 9.197 - 9.244: 98.9280% ( 1) 00:10:01.536 9.244 - 9.292: 98.9353% ( 1) 00:10:01.536 9.339 - 9.387: 98.9427% ( 1) 00:10:01.536 9.529 - 9.576: 98.9573% ( 2) 00:10:01.536 9.576 - 9.624: 98.9720% ( 2) 00:10:01.536 9.956 - 10.003: 98.9794% ( 1) 00:10:01.536 10.098 - 
10.145: 98.9867% ( 1) 00:10:01.536 10.572 - 10.619: 98.9941% ( 1) 00:10:01.536 10.761 - 10.809: 99.0014% ( 1) 00:10:01.536 10.809 - 10.856: 99.0087% ( 1) 00:10:01.536 10.951 - 10.999: 99.0161% ( 1) 00:10:01.536 10.999 - 11.046: 99.0308% ( 2) 00:10:01.536 11.283 - 11.330: 99.0381% ( 1) 00:10:01.536 11.567 - 11.615: 99.0455% ( 1) 00:10:01.536 11.757 - 11.804: 99.0528% ( 1) 00:10:01.536 11.804 - 11.852: 99.0601% ( 1) 00:10:01.536 11.852 - 11.899: 99.0675% ( 1) 00:10:01.536 12.231 - 12.326: 99.0822% ( 2) 00:10:01.536 12.895 - 12.990: 99.0968% ( 2) 00:10:01.536 13.179 - 13.274: 99.1042% ( 1) 00:10:01.536 13.464 - 13.559: 99.1115% ( 1) 00:10:01.536 13.748 - 13.843: 99.1189% ( 1) 00:10:01.536 13.938 - 14.033: 99.1262% ( 1) 00:10:01.536 14.222 - 14.317: 99.1336% ( 1) 00:10:01.536 14.507 - 14.601: 99.1409% ( 1) 00:10:01.536 14.696 - 14.791: 99.1482% ( 1) 00:10:01.536 14.791 - 14.886: 99.1556% ( 1) 00:10:01.536 14.886 - 14.981: 99.1629% ( 1) 00:10:01.536 17.161 - 17.256: 99.1776% ( 2) 00:10:01.536 17.256 - 17.351: 99.1923% ( 2) 00:10:01.536 17.351 - 17.446: 99.1996% ( 1) 00:10:01.536 17.446 - 17.541: 99.2290% ( 4) 00:10:01.536 17.541 - 17.636: 99.2804% ( 7) 00:10:01.536 17.636 - 17.730: 99.3465% ( 9) 00:10:01.536 17.730 - 17.825: 99.3979% ( 7) 00:10:01.536 17.825 - 17.920: 99.4273% ( 4) 00:10:01.536 17.920 - 18.015: 99.4787% ( 7) 00:10:01.536 18.015 - 18.110: 99.5080% ( 4) 00:10:01.536 18.110 - 18.204: 99.5668% ( 8) 00:10:01.536 18.204 - 18.299: 99.5741% ( 1) 00:10:01.536 18.299 - 18.394: 99.6108% ( 5) 00:10:01.536 18.394 - 18.489: 99.6622% ( 7) 00:10:01.536 18.584 - 18.679: 99.6843% ( 3) 00:10:01.536 18.679 - 18.773: 99.7063% ( 3) 00:10:01.536 18.773 - 18.868: 99.7650% ( 8) 00:10:01.536 18.868 - 18.963: 99.7797% ( 2) 00:10:01.536 18.963 - 19.058: 99.8017% ( 3) 00:10:01.536 19.058 - 19.153: 99.8091% ( 1) 00:10:01.536 19.153 - 19.247: 99.8164% ( 1) 00:10:01.536 19.247 - 19.342: 99.8311% ( 2) 00:10:01.536 20.196 - 20.290: 99.8385% ( 1) 00:10:01.536 20.764 - 20.859: 99.8458% ( 
1) 00:10:01.536 21.239 - 21.333: 99.8531% ( 1) 00:10:01.536 22.566 - 22.661: 99.8605% ( 1) 00:10:01.794 22.756 - 22.850: 99.8678% ( 1) 00:10:01.794 27.117 - 27.307: 99.8752% ( 1) 00:10:01.794 28.444 - 28.634: 99.8899% ( 2) 00:10:01.794 32.237 - 32.427: 99.8972% ( 1) 00:10:01.794 34.323 - 34.513: 99.9045% ( 1) 00:10:01.794 40.960 - 41.150: 99.9119% ( 1) 00:10:01.794 3980.705 - 4004.978: 99.9780% ( 9) 00:10:01.794 4004.978 - 4029.250: 100.0000% ( 3) 00:10:01.794 00:10:01.794 Complete histogram 00:10:01.794 ================== 00:10:01.794 Range in us Cumulative Count 00:10:01.794 2.039 - 2.050: 0.5287% ( 72) 00:10:01.794 2.050 - 2.062: 9.2812% ( 1192) 00:10:01.795 2.062 - 2.074: 14.3109% ( 685) 00:10:01.795 2.074 - 2.086: 21.9253% ( 1037) 00:10:01.795 2.086 - 2.098: 48.6453% ( 3639) 00:10:01.795 2.098 - 2.110: 59.1306% ( 1428) 00:10:01.795 2.110 - 2.121: 63.0002% ( 527) 00:10:01.795 2.121 - 2.133: 67.0020% ( 545) 00:10:01.795 2.133 - 2.145: 68.2135% ( 165) 00:10:01.795 2.145 - 2.157: 73.6324% ( 738) 00:10:01.795 2.157 - 2.169: 84.5290% ( 1484) 00:10:01.795 2.169 - 2.181: 87.9947% ( 472) 00:10:01.795 2.181 - 2.193: 89.1916% ( 163) 00:10:01.795 2.193 - 2.204: 90.2709% ( 147) 00:10:01.795 2.204 - 2.216: 91.0419% ( 105) 00:10:01.795 2.216 - 2.228: 92.0112% ( 132) 00:10:01.795 2.228 - 2.240: 93.8542% ( 251) 00:10:01.795 2.240 - 2.252: 94.8748% ( 139) 00:10:01.795 2.252 - 2.264: 95.1979% ( 44) 00:10:01.795 2.264 - 2.276: 95.3741% ( 24) 00:10:01.795 2.276 - 2.287: 95.5577% ( 25) 00:10:01.795 2.287 - 2.299: 95.6825% ( 17) 00:10:01.795 2.299 - 2.311: 95.8220% ( 19) 00:10:01.795 2.311 - 2.323: 96.0203% ( 27) 00:10:01.795 2.323 - 2.335: 96.1818% ( 22) 00:10:01.795 2.335 - 2.347: 96.3213% ( 19) 00:10:01.795 2.347 - 2.359: 96.5343% ( 29) 00:10:01.795 2.359 - 2.370: 96.8206% ( 39) 00:10:01.795 2.370 - 2.382: 97.2465% ( 58) 00:10:01.795 2.382 - 2.394: 97.6063% ( 49) 00:10:01.795 2.394 - 2.406: 97.8045% ( 27) 00:10:01.795 2.406 - 2.418: 97.9881% ( 25) 00:10:01.795 2.418 - 2.430: 
98.0762% ( 12) 00:10:01.795 2.441 - 2.453: 98.1570% ( 11) 00:10:01.795 2.453 - 2.465: 98.2231% ( 9) 00:10:01.795 2.465 - 2.477: 98.2892% ( 9) 00:10:01.795 2.477 - 2.489: 98.3626% ( 10) 00:10:01.795 2.489 - 2.501: 98.3773% ( 2) 00:10:01.795 2.501 - 2.513: 98.4066% ( 4) 00:10:01.795 2.513 - 2.524: 98.4140% ( 1) 00:10:01.795 2.524 - 2.536: 98.4213% ( 1) 00:10:01.795 2.536 - 2.548: 98.4434% ( 3) 00:10:01.795 2.560 - 2.572: 98.4507% ( 1) 00:10:01.795 2.584 - 2.596: 98.4580% ( 1) 00:10:01.795 2.607 - 2.619: 98.4654% ( 1) 00:10:01.795 2.679 - 2.690: 98.4801% ( 2) 00:10:01.795 2.690 - 2.702: 98.4874% ( 1) 00:10:01.795 2.714 - 2.726: 98.4947% ( 1) 00:10:01.795 2.738 - 2.750: 98.5021% ( 1) 00:10:01.795 2.750 - 2.761: 98.5094% ( 1) 00:10:01.795 2.868 - 2.880: 98.5168% ( 1) 00:10:01.795 2.999 - 3.010: 98.5241% ( 1) 00:10:01.795 3.224 - 3.247: 98.5388% ( 2) 00:10:01.795 3.295 - 3.319: 98.5461% ( 1) 00:10:01.795 3.342 - 3.366: 98.5682% ( 3) 00:10:01.795 3.390 - 3.413: 98.5755% ( 1) 00:10:01.795 3.413 - 3.437: 98.5829% ( 1) 00:10:01.795 3.437 - 3.461: 98.6049% ( 3) 00:10:01.795 3.461 - 3.484: 98.6269% ( 3) 00:10:01.795 3.484 - 3.508: 98.6489% ( 3) 00:10:01.795 3.508 - 3.532: 98.6710% ( 3) 00:10:01.795 3.532 - 3.556: 98.6857% ( 2) 00:10:01.795 3.556 - 3.579: 98.6930% ( 1) 00:10:01.795 3.579 - 3.603: 98.7003% ( 1) 00:10:01.795 3.603 - 3.627: 98.7077% ( 1) 00:10:01.795 3.627 - 3.650: 98.7224% ( 2) 00:10:01.795 3.745 - 3.769: 98.7297% ( 1) 00:10:01.795 3.769 - 3.793: 98.7371% ( 1) 00:10:01.795 3.793 - 3.816: 98.7444% ( 1) 00:10:01.795 3.816 - 3.840: 98.7517% ( 1) 00:10:01.795 3.840 - 3.864: 98.7591% ( 1) 00:10:01.795 3.887 - 3.911: 98.7664% ( 1) 00:10:01.795 3.911 - 3.935: 98.7738% ( 1) 00:10:01.795 4.741 - 4.764: 98.7811% ( 1) 00:10:01.795 5.001 - 5.025: 98.7885% ( 1) 00:10:01.795 5.997 - 6.021: 98.7958% ( 1) 00:10:01.795 6.068 - 6.116: 98.8031% ( 1) 00:10:01.795 6.210 - 6.258: 98.8105% ( 1) 
[2024-04-18 16:57:17.247295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:10:01.795 6.353 - 6.400: 98.8178% ( 1) 00:10:01.795 6.447 - 6.495: 98.8252% ( 1) 00:10:01.795 6.637 - 6.684: 98.8325% ( 1) 00:10:01.795 6.684 - 6.732: 98.8399% ( 1) 00:10:01.795 6.827 - 6.874: 98.8472% ( 1) 00:10:01.795 7.206 - 7.253: 98.8545% ( 1) 00:10:01.795 7.253 - 7.301: 98.8619% ( 1) 00:10:01.795 7.490 - 7.538: 98.8692% ( 1) 00:10:01.795 8.486 - 8.533: 98.8766% ( 1) 00:10:01.795 8.628 - 8.676: 98.8839% ( 1) 00:10:01.795 8.676 - 8.723: 98.8913% ( 1) 00:10:01.795 15.455 - 15.550: 98.8986% ( 1) 00:10:01.795 15.550 - 15.644: 98.9059% ( 1) 00:10:01.795 15.644 - 15.739: 98.9280% ( 3) 00:10:01.795 15.739 - 15.834: 98.9353% ( 1) 00:10:01.795 15.834 - 15.929: 98.9573% ( 3) 00:10:01.795 15.929 - 16.024: 98.9941% ( 5) 00:10:01.795 16.024 - 16.119: 99.0308% ( 5) 00:10:01.795 16.119 - 16.213: 99.0455% ( 2) 00:10:01.795 16.213 - 16.308: 99.0748% ( 4) 00:10:01.795 16.308 - 16.403: 99.0968% ( 3) 00:10:01.795 16.403 - 16.498: 99.1115% ( 2) 00:10:01.795 16.498 - 16.593: 99.1556% ( 6) 00:10:01.795 16.593 - 16.687: 99.1923% ( 5) 00:10:01.795 16.687 - 16.782: 99.2217% ( 4) 00:10:01.795 16.782 - 16.877: 99.2731% ( 7) 00:10:01.795 16.877 - 16.972: 99.2804% ( 1) 00:10:01.795 16.972 - 17.067: 99.3098% ( 4) 00:10:01.795 17.067 - 17.161: 99.3171% ( 1) 00:10:01.795 17.161 - 17.256: 99.3392% ( 3) 00:10:01.795 17.256 - 17.351: 99.3612% ( 3) 00:10:01.795 17.351 - 17.446: 99.3685% ( 1) 00:10:01.795 17.446 - 17.541: 99.3759% ( 1) 00:10:01.795 17.541 - 17.636: 99.3832% ( 1) 00:10:01.795 17.825 - 17.920: 99.3979% ( 2) 00:10:01.795 18.679 - 18.773: 99.4052% ( 1) 00:10:01.795 18.868 - 18.963: 99.4126% ( 1) 00:10:01.795 18.963 - 19.058: 99.4199% ( 1) 00:10:01.795 23.324 - 23.419: 99.4273% ( 1) 00:10:01.795 31.289 - 31.479: 99.4346% ( 1) 00:10:01.795 35.840 - 36.030: 99.4420% ( 1) 00:10:01.795 53.096 - 53.476: 99.4493% ( 1) 00:10:01.795 166.874 - 167.633: 99.4566% ( 1) 00:10:01.795 3980.705 - 4004.978: 
99.9413% ( 66) 00:10:01.795 4004.978 - 4029.250: 100.0000% ( 8) 00:10:01.795 00:10:01.795 16:57:17 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:01.795 16:57:17 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:01.795 16:57:17 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:01.795 16:57:17 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:01.795 16:57:17 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:02.053 [ 00:10:02.053 { 00:10:02.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:02.053 "subtype": "Discovery", 00:10:02.053 "listen_addresses": [], 00:10:02.053 "allow_any_host": true, 00:10:02.053 "hosts": [] 00:10:02.053 }, 00:10:02.053 { 00:10:02.053 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:02.053 "subtype": "NVMe", 00:10:02.053 "listen_addresses": [ 00:10:02.053 { 00:10:02.053 "transport": "VFIOUSER", 00:10:02.053 "trtype": "VFIOUSER", 00:10:02.053 "adrfam": "IPv4", 00:10:02.053 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:02.053 "trsvcid": "0" 00:10:02.053 } 00:10:02.053 ], 00:10:02.053 "allow_any_host": true, 00:10:02.053 "hosts": [], 00:10:02.053 "serial_number": "SPDK1", 00:10:02.053 "model_number": "SPDK bdev Controller", 00:10:02.053 "max_namespaces": 32, 00:10:02.053 "min_cntlid": 1, 00:10:02.053 "max_cntlid": 65519, 00:10:02.053 "namespaces": [ 00:10:02.053 { 00:10:02.053 "nsid": 1, 00:10:02.053 "bdev_name": "Malloc1", 00:10:02.053 "name": "Malloc1", 00:10:02.053 "nguid": "CAD99B0C67724743B8690B0C952D4454", 00:10:02.053 "uuid": "cad99b0c-6772-4743-b869-0b0c952d4454" 00:10:02.053 }, 00:10:02.053 { 00:10:02.053 "nsid": 2, 00:10:02.053 "bdev_name": "Malloc3", 00:10:02.053 "name": "Malloc3", 00:10:02.053 "nguid": "03004E8B4EBA4E6FADA2DF06E1FCA481", 00:10:02.053 "uuid": 
"03004e8b-4eba-4e6f-ada2-df06e1fca481" 00:10:02.053 } 00:10:02.053 ] 00:10:02.053 }, 00:10:02.053 { 00:10:02.053 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:02.053 "subtype": "NVMe", 00:10:02.053 "listen_addresses": [ 00:10:02.053 { 00:10:02.053 "transport": "VFIOUSER", 00:10:02.053 "trtype": "VFIOUSER", 00:10:02.053 "adrfam": "IPv4", 00:10:02.053 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:02.053 "trsvcid": "0" 00:10:02.053 } 00:10:02.053 ], 00:10:02.053 "allow_any_host": true, 00:10:02.053 "hosts": [], 00:10:02.053 "serial_number": "SPDK2", 00:10:02.053 "model_number": "SPDK bdev Controller", 00:10:02.053 "max_namespaces": 32, 00:10:02.053 "min_cntlid": 1, 00:10:02.053 "max_cntlid": 65519, 00:10:02.053 "namespaces": [ 00:10:02.053 { 00:10:02.053 "nsid": 1, 00:10:02.053 "bdev_name": "Malloc2", 00:10:02.053 "name": "Malloc2", 00:10:02.053 "nguid": "55194E22B1E749848CF188FD5205EBAE", 00:10:02.053 "uuid": "55194e22-b1e7-4984-8cf1-88fd5205ebae" 00:10:02.053 } 00:10:02.053 ] 00:10:02.053 } 00:10:02.053 ] 00:10:02.053 16:57:17 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:02.053 16:57:17 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1640266 00:10:02.053 16:57:17 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:02.053 16:57:17 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:02.053 16:57:17 -- common/autotest_common.sh@1251 -- # local i=0 00:10:02.053 16:57:17 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:02.054 16:57:17 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:02.054 16:57:17 -- common/autotest_common.sh@1262 -- # return 0 00:10:02.054 16:57:17 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:02.054 16:57:17 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:02.054 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.054 [2024-04-18 16:57:17.707853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:02.311 Malloc4 00:10:02.311 16:57:17 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:02.568 [2024-04-18 16:57:18.038258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:02.568 16:57:18 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:02.568 Asynchronous Event Request test 00:10:02.568 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:02.568 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:02.568 Registering asynchronous event callbacks... 00:10:02.568 Starting namespace attribute notice tests for all controllers... 00:10:02.568 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:02.568 aer_cb - Changed Namespace 00:10:02.568 Cleaning up... 
00:10:02.827 [ 00:10:02.827 { 00:10:02.827 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:02.827 "subtype": "Discovery", 00:10:02.827 "listen_addresses": [], 00:10:02.827 "allow_any_host": true, 00:10:02.827 "hosts": [] 00:10:02.827 }, 00:10:02.827 { 00:10:02.827 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:02.827 "subtype": "NVMe", 00:10:02.827 "listen_addresses": [ 00:10:02.827 { 00:10:02.827 "transport": "VFIOUSER", 00:10:02.827 "trtype": "VFIOUSER", 00:10:02.827 "adrfam": "IPv4", 00:10:02.827 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:02.827 "trsvcid": "0" 00:10:02.827 } 00:10:02.827 ], 00:10:02.827 "allow_any_host": true, 00:10:02.827 "hosts": [], 00:10:02.827 "serial_number": "SPDK1", 00:10:02.827 "model_number": "SPDK bdev Controller", 00:10:02.827 "max_namespaces": 32, 00:10:02.827 "min_cntlid": 1, 00:10:02.827 "max_cntlid": 65519, 00:10:02.827 "namespaces": [ 00:10:02.827 { 00:10:02.827 "nsid": 1, 00:10:02.827 "bdev_name": "Malloc1", 00:10:02.827 "name": "Malloc1", 00:10:02.827 "nguid": "CAD99B0C67724743B8690B0C952D4454", 00:10:02.827 "uuid": "cad99b0c-6772-4743-b869-0b0c952d4454" 00:10:02.827 }, 00:10:02.827 { 00:10:02.827 "nsid": 2, 00:10:02.827 "bdev_name": "Malloc3", 00:10:02.827 "name": "Malloc3", 00:10:02.827 "nguid": "03004E8B4EBA4E6FADA2DF06E1FCA481", 00:10:02.827 "uuid": "03004e8b-4eba-4e6f-ada2-df06e1fca481" 00:10:02.827 } 00:10:02.827 ] 00:10:02.827 }, 00:10:02.827 { 00:10:02.827 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:02.827 "subtype": "NVMe", 00:10:02.827 "listen_addresses": [ 00:10:02.827 { 00:10:02.827 "transport": "VFIOUSER", 00:10:02.827 "trtype": "VFIOUSER", 00:10:02.827 "adrfam": "IPv4", 00:10:02.827 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:02.827 "trsvcid": "0" 00:10:02.827 } 00:10:02.827 ], 00:10:02.827 "allow_any_host": true, 00:10:02.827 "hosts": [], 00:10:02.827 "serial_number": "SPDK2", 00:10:02.827 "model_number": "SPDK bdev Controller", 00:10:02.827 "max_namespaces": 32, 00:10:02.827 
"min_cntlid": 1, 00:10:02.827 "max_cntlid": 65519, 00:10:02.827 "namespaces": [ 00:10:02.827 { 00:10:02.827 "nsid": 1, 00:10:02.827 "bdev_name": "Malloc2", 00:10:02.827 "name": "Malloc2", 00:10:02.827 "nguid": "55194E22B1E749848CF188FD5205EBAE", 00:10:02.827 "uuid": "55194e22-b1e7-4984-8cf1-88fd5205ebae" 00:10:02.827 }, 00:10:02.827 { 00:10:02.827 "nsid": 2, 00:10:02.827 "bdev_name": "Malloc4", 00:10:02.827 "name": "Malloc4", 00:10:02.827 "nguid": "D95C545B68FE46DEAF81EF40AF7C7E62", 00:10:02.827 "uuid": "d95c545b-68fe-46de-af81-ef40af7c7e62" 00:10:02.827 } 00:10:02.827 ] 00:10:02.827 } 00:10:02.827 ] 00:10:02.827 16:57:18 -- target/nvmf_vfio_user.sh@44 -- # wait 1640266 00:10:02.827 16:57:18 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:02.827 16:57:18 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1633910 00:10:02.827 16:57:18 -- common/autotest_common.sh@936 -- # '[' -z 1633910 ']' 00:10:02.827 16:57:18 -- common/autotest_common.sh@940 -- # kill -0 1633910 00:10:02.827 16:57:18 -- common/autotest_common.sh@941 -- # uname 00:10:02.827 16:57:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:02.827 16:57:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1633910 00:10:02.827 16:57:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:02.827 16:57:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:02.827 16:57:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1633910' 00:10:02.827 killing process with pid 1633910 00:10:02.827 16:57:18 -- common/autotest_common.sh@955 -- # kill 1633910 00:10:02.827 [2024-04-18 16:57:18.322981] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:02.827 16:57:18 -- common/autotest_common.sh@960 -- # wait 1633910 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 
00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1640409 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1640409' 00:10:03.087 Process pid: 1640409 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:03.087 16:57:18 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1640409 00:10:03.087 16:57:18 -- common/autotest_common.sh@817 -- # '[' -z 1640409 ']' 00:10:03.087 16:57:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.087 16:57:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:03.087 16:57:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.087 16:57:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:03.087 16:57:18 -- common/autotest_common.sh@10 -- # set +x 00:10:03.087 [2024-04-18 16:57:18.723802] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:03.087 [2024-04-18 16:57:18.724834] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:10:03.087 [2024-04-18 16:57:18.724889] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.087 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.087 [2024-04-18 16:57:18.790189] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.347 [2024-04-18 16:57:18.911255] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.347 [2024-04-18 16:57:18.911318] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.347 [2024-04-18 16:57:18.911340] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.347 [2024-04-18 16:57:18.911357] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.347 [2024-04-18 16:57:18.911372] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.347 [2024-04-18 16:57:18.911467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.347 [2024-04-18 16:57:18.911527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.347 [2024-04-18 16:57:18.911643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.347 [2024-04-18 16:57:18.911651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.347 [2024-04-18 16:57:19.020793] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:10:03.347 [2024-04-18 16:57:19.021047] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 
00:10:03.347 [2024-04-18 16:57:19.021322] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:10:03.347 [2024-04-18 16:57:19.022139] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:03.347 [2024-04-18 16:57:19.022274] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:10:03.347 16:57:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:03.347 16:57:19 -- common/autotest_common.sh@850 -- # return 0 00:10:03.347 16:57:19 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:04.722 16:57:20 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:04.722 16:57:20 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:04.722 16:57:20 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:04.722 16:57:20 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:04.722 16:57:20 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:04.722 16:57:20 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:04.980 Malloc1 00:10:04.980 16:57:20 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:05.238 16:57:20 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:05.495 16:57:21 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 
00:10:05.753 16:57:21 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:05.753 16:57:21 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:05.753 16:57:21 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:06.010 Malloc2 00:10:06.010 16:57:21 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:06.268 16:57:21 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:06.538 16:57:22 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:06.795 16:57:22 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:06.795 16:57:22 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1640409 00:10:06.795 16:57:22 -- common/autotest_common.sh@936 -- # '[' -z 1640409 ']' 00:10:06.795 16:57:22 -- common/autotest_common.sh@940 -- # kill -0 1640409 00:10:06.795 16:57:22 -- common/autotest_common.sh@941 -- # uname 00:10:06.795 16:57:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:06.795 16:57:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1640409 00:10:06.795 16:57:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:06.795 16:57:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:06.795 16:57:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1640409' 00:10:06.795 killing process with pid 1640409 00:10:06.796 16:57:22 -- common/autotest_common.sh@955 -- # kill 1640409 00:10:06.796 16:57:22 -- common/autotest_common.sh@960 -- # wait 1640409 
00:10:07.368 16:57:22 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:10:07.368 16:57:22 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:10:07.368
00:10:07.368 real	0m53.141s
00:10:07.368 user	3m29.354s
00:10:07.368 sys	0m4.477s
00:10:07.368 16:57:22 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:10:07.368 16:57:22 -- common/autotest_common.sh@10 -- # set +x
00:10:07.368 ************************************
00:10:07.368 END TEST nvmf_vfio_user
00:10:07.368 ************************************
00:10:07.368 16:57:22 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:10:07.368 16:57:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:10:07.368 16:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:07.368 16:57:22 -- common/autotest_common.sh@10 -- # set +x
00:10:07.368 ************************************
00:10:07.368 START TEST nvmf_vfio_user_nvme_compliance
00:10:07.368 ************************************
00:10:07.368 16:57:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:10:07.368 * Looking for test storage...
00:10:07.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:07.368 16:57:22 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.368 16:57:22 -- nvmf/common.sh@7 -- # uname -s 00:10:07.368 16:57:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.368 16:57:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.368 16:57:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.368 16:57:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.368 16:57:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.368 16:57:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.368 16:57:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.368 16:57:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.368 16:57:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.368 16:57:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.368 16:57:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.368 16:57:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.368 16:57:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.368 16:57:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.368 16:57:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.368 16:57:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.368 16:57:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.368 16:57:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.368 16:57:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.368 16:57:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.369 16:57:22 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.369 16:57:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.369 16:57:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.369 16:57:22 -- paths/export.sh@5 -- # export PATH 00:10:07.369 16:57:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.369 16:57:22 -- nvmf/common.sh@47 -- # : 0 00:10:07.369 16:57:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.369 16:57:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.369 16:57:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.369 16:57:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.369 16:57:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.369 16:57:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.369 16:57:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.369 16:57:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.369 16:57:22 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.369 16:57:22 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.369 16:57:22 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:07.369 16:57:22 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:07.369 16:57:22 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:07.369 16:57:22 -- compliance/compliance.sh@20 -- # nvmfpid=1640912 00:10:07.369 16:57:22 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:07.369 16:57:22 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1640912' 00:10:07.369 Process pid: 1640912 00:10:07.369 16:57:22 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' 
SIGINT SIGTERM EXIT 00:10:07.369 16:57:22 -- compliance/compliance.sh@24 -- # waitforlisten 1640912 00:10:07.369 16:57:22 -- common/autotest_common.sh@817 -- # '[' -z 1640912 ']' 00:10:07.369 16:57:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.369 16:57:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:07.369 16:57:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.369 16:57:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:07.369 16:57:22 -- common/autotest_common.sh@10 -- # set +x 00:10:07.369 [2024-04-18 16:57:23.015418] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:10:07.369 [2024-04-18 16:57:23.015508] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.369 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.628 [2024-04-18 16:57:23.079863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.628 [2024-04-18 16:57:23.191063] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.628 [2024-04-18 16:57:23.191134] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.628 [2024-04-18 16:57:23.191170] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.628 [2024-04-18 16:57:23.191188] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.628 [2024-04-18 16:57:23.191202] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:07.628 [2024-04-18 16:57:23.191332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.628 [2024-04-18 16:57:23.191403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.628 [2024-04-18 16:57:23.191409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.628 16:57:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:07.628 16:57:23 -- common/autotest_common.sh@850 -- # return 0 00:10:07.628 16:57:23 -- compliance/compliance.sh@26 -- # sleep 1 00:10:09.006 16:57:24 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:09.006 16:57:24 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:09.006 16:57:24 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:09.006 16:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.006 16:57:24 -- common/autotest_common.sh@10 -- # set +x 00:10:09.006 16:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.006 16:57:24 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:09.006 16:57:24 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:09.006 16:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.006 16:57:24 -- common/autotest_common.sh@10 -- # set +x 00:10:09.006 malloc0 00:10:09.006 16:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.006 16:57:24 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:09.006 16:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.006 16:57:24 -- common/autotest_common.sh@10 -- # set +x 00:10:09.006 16:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.006 16:57:24 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:09.006 16:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.006 
16:57:24 -- common/autotest_common.sh@10 -- # set +x 00:10:09.006 16:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.006 16:57:24 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:09.006 16:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.006 16:57:24 -- common/autotest_common.sh@10 -- # set +x 00:10:09.006 16:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.006 16:57:24 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:09.006 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.006 00:10:09.006 00:10:09.006 CUnit - A unit testing framework for C - Version 2.1-3 00:10:09.006 http://cunit.sourceforge.net/ 00:10:09.006 00:10:09.006 00:10:09.006 Suite: nvme_compliance 00:10:09.006 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-18 16:57:24.539011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.006 [2024-04-18 16:57:24.540476] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:09.006 [2024-04-18 16:57:24.540502] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:09.006 [2024-04-18 16:57:24.540514] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:09.006 [2024-04-18 16:57:24.542027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.007 passed 00:10:09.007 Test: admin_identify_ctrlr_verify_fused ...[2024-04-18 16:57:24.629627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.007 [2024-04-18 16:57:24.632645] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.007 passed 
00:10:09.266 Test: admin_identify_ns ...[2024-04-18 16:57:24.719962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.266 [2024-04-18 16:57:24.779400] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:09.266 [2024-04-18 16:57:24.787399] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:09.266 [2024-04-18 16:57:24.808526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.266 passed 00:10:09.266 Test: admin_get_features_mandatory_features ...[2024-04-18 16:57:24.892185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.267 [2024-04-18 16:57:24.895208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.267 passed 00:10:09.556 Test: admin_get_features_optional_features ...[2024-04-18 16:57:24.979810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.556 [2024-04-18 16:57:24.982845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.556 passed 00:10:09.556 Test: admin_set_features_number_of_queues ...[2024-04-18 16:57:25.064253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.556 [2024-04-18 16:57:25.173492] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.556 passed 00:10:09.816 Test: admin_get_log_page_mandatory_logs ...[2024-04-18 16:57:25.257334] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.816 [2024-04-18 16:57:25.260355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.816 passed 00:10:09.816 Test: admin_get_log_page_with_lpo ...[2024-04-18 16:57:25.343707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.816 [2024-04-18 
16:57:25.415396] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:09.816 [2024-04-18 16:57:25.428501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:09.816 passed 00:10:09.816 Test: fabric_property_get ...[2024-04-18 16:57:25.509159] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:09.816 [2024-04-18 16:57:25.510443] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:09.816 [2024-04-18 16:57:25.514189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:10.076 passed 00:10:10.076 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-18 16:57:25.597763] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:10.076 [2024-04-18 16:57:25.602640] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:10.076 [2024-04-18 16:57:25.603799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:10.076 passed 00:10:10.076 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-18 16:57:25.685977] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:10.076 [2024-04-18 16:57:25.769389] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:10.336 [2024-04-18 16:57:25.785396] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:10.336 [2024-04-18 16:57:25.790513] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:10.336 passed 00:10:10.336 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-18 16:57:25.874146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:10.336 [2024-04-18 16:57:25.875470] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:10:10.336 [2024-04-18 16:57:25.877166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:10.336 passed 00:10:10.336 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-18 16:57:25.959411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:10.336 [2024-04-18 16:57:26.038410] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:10.596 [2024-04-18 16:57:26.062394] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:10.596 [2024-04-18 16:57:26.065500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:10.596 passed 00:10:10.596 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-18 16:57:26.150223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:10.596 [2024-04-18 16:57:26.151549] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:10.596 [2024-04-18 16:57:26.151591] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:10.596 [2024-04-18 16:57:26.153248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:10.596 passed 00:10:10.596 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-18 16:57:26.235473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:10.856 [2024-04-18 16:57:26.329406] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:10.856 [2024-04-18 16:57:26.337412] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:10.856 [2024-04-18 16:57:26.345396] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:10.856 [2024-04-18 16:57:26.353422] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:10.856 
[2024-04-18 16:57:26.382519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:10:10.856 passed
00:10:10.856 Test: admin_create_io_sq_verify_pc ...[2024-04-18 16:57:26.463789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:10:10.856 [2024-04-18 16:57:26.480405] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:10:10.856 [2024-04-18 16:57:26.497224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:10:10.856 passed
00:10:11.114 Test: admin_create_io_qp_max_qps ...[2024-04-18 16:57:26.584857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:10:12.055 [2024-04-18 16:57:27.691398] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:10:12.624 [2024-04-18 16:57:28.078135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:10:12.624 passed
00:10:12.624 Test: admin_create_io_sq_shared_cq ...[2024-04-18 16:57:28.160447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:10:12.624 [2024-04-18 16:57:28.294408] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:10:12.883 [2024-04-18 16:57:28.331499] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:10:12.883 passed
00:10:12.883
00:10:12.883 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:12.883               suites      1      1    n/a      0        0
00:10:12.883                tests     18     18     18      0        0
00:10:12.883              asserts    360    360    360      0      n/a
00:10:12.883
00:10:12.883 Elapsed time = 1.573 seconds
00:10:12.883 16:57:28 -- compliance/compliance.sh@42 -- # killprocess 1640912
00:10:12.883 16:57:28 -- common/autotest_common.sh@936 -- # '[' -z 1640912 ']'
00:10:12.883 16:57:28 -- common/autotest_common.sh@940 -- # kill -0 1640912
00:10:12.883 16:57:28 -- common/autotest_common.sh@941 -- # uname
00:10:12.883 16:57:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:12.883 16:57:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1640912
00:10:12.883 16:57:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:12.883 16:57:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:12.883 16:57:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1640912'
00:10:12.883 killing process with pid 1640912
00:10:12.883 16:57:28 -- common/autotest_common.sh@955 -- # kill 1640912
00:10:12.883 16:57:28 -- common/autotest_common.sh@960 -- # wait 1640912
00:10:13.143 16:57:28 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:10:13.143 16:57:28 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:10:13.143
00:10:13.143 real	0m5.818s
00:10:13.143 user	0m16.235s
00:10:13.143 sys	0m0.570s
00:10:13.143 16:57:28 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:10:13.143 16:57:28 -- common/autotest_common.sh@10 -- # set +x
00:10:13.143 ************************************
00:10:13.143 END TEST nvmf_vfio_user_nvme_compliance
00:10:13.143 ************************************
00:10:13.143 16:57:28 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:10:13.143 16:57:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:10:13.143 16:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:13.143 16:57:28 -- common/autotest_common.sh@10 -- # set +x
00:10:13.143 ************************************
00:10:13.143 START TEST nvmf_vfio_user_fuzz
00:10:13.143 ************************************
00:10:13.143 16:57:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:10:13.402 * Looking for test storage...
00:10:13.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.402 16:57:28 -- nvmf/common.sh@7 -- # uname -s 00:10:13.402 16:57:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.402 16:57:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.402 16:57:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.402 16:57:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.402 16:57:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.402 16:57:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.402 16:57:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.402 16:57:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.402 16:57:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.402 16:57:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.402 16:57:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.402 16:57:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.402 16:57:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.402 16:57:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.402 16:57:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.402 16:57:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.402 16:57:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.402 16:57:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.402 16:57:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.402 16:57:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.402 16:57:28 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.402 16:57:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.402 16:57:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.402 16:57:28 -- paths/export.sh@5 -- # export PATH 00:10:13.402 16:57:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.402 16:57:28 -- nvmf/common.sh@47 -- # : 0 00:10:13.402 16:57:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.402 16:57:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.402 16:57:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.402 16:57:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.402 16:57:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.402 16:57:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.402 16:57:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.402 16:57:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1641748 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:13.402 16:57:28 -- 
target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1641748' 00:10:13.402 Process pid: 1641748 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:13.402 16:57:28 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1641748 00:10:13.402 16:57:28 -- common/autotest_common.sh@817 -- # '[' -z 1641748 ']' 00:10:13.402 16:57:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.402 16:57:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:13.402 16:57:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.402 16:57:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:13.402 16:57:28 -- common/autotest_common.sh@10 -- # set +x 00:10:13.660 16:57:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:13.660 16:57:29 -- common/autotest_common.sh@850 -- # return 0 00:10:13.660 16:57:29 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:14.611 16:57:30 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:14.611 16:57:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.611 16:57:30 -- common/autotest_common.sh@10 -- # set +x 00:10:14.611 16:57:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.611 16:57:30 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:14.611 16:57:30 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:14.611 16:57:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.611 16:57:30 -- common/autotest_common.sh@10 -- # set +x 00:10:14.611 malloc0 00:10:14.611 16:57:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.611 16:57:30 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 
-a -s spdk 00:10:14.611 16:57:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.611 16:57:30 -- common/autotest_common.sh@10 -- # set +x 00:10:14.611 16:57:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.611 16:57:30 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:14.611 16:57:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.611 16:57:30 -- common/autotest_common.sh@10 -- # set +x 00:10:14.611 16:57:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.611 16:57:30 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:14.611 16:57:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.611 16:57:30 -- common/autotest_common.sh@10 -- # set +x 00:10:14.611 16:57:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.611 16:57:30 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:14.611 16:57:30 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:10:46.713 Fuzzing completed. 
Shutting down the fuzz application 00:10:46.713 00:10:46.713 Dumping successful admin opcodes: 00:10:46.713 8, 9, 10, 24, 00:10:46.713 Dumping successful io opcodes: 00:10:46.713 0, 00:10:46.713 NS: 0x200003a1ef00 I/O qp, Total commands completed: 673514, total successful commands: 2621, random_seed: 933681344 00:10:46.713 NS: 0x200003a1ef00 admin qp, Total commands completed: 89366, total successful commands: 716, random_seed: 1854940288 00:10:46.713 16:58:00 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:10:46.713 16:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.713 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:10:46.713 16:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.713 16:58:00 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1641748 00:10:46.713 16:58:00 -- common/autotest_common.sh@936 -- # '[' -z 1641748 ']' 00:10:46.713 16:58:00 -- common/autotest_common.sh@940 -- # kill -0 1641748 00:10:46.713 16:58:00 -- common/autotest_common.sh@941 -- # uname 00:10:46.713 16:58:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:46.713 16:58:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1641748 00:10:46.713 16:58:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:46.713 16:58:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:46.713 16:58:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1641748' 00:10:46.713 killing process with pid 1641748 00:10:46.714 16:58:00 -- common/autotest_common.sh@955 -- # kill 1641748 00:10:46.714 16:58:00 -- common/autotest_common.sh@960 -- # wait 1641748 00:10:46.714 16:58:01 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:10:46.714 16:58:01 -- 
target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:46.714 00:10:46.714 real 0m32.359s 00:10:46.714 user 0m33.439s 00:10:46.714 sys 0m26.040s 00:10:46.714 16:58:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:46.714 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:10:46.714 ************************************ 00:10:46.714 END TEST nvmf_vfio_user_fuzz 00:10:46.714 ************************************ 00:10:46.714 16:58:01 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:46.714 16:58:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:46.714 16:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.714 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:10:46.714 ************************************ 00:10:46.714 START TEST nvmf_host_management 00:10:46.714 ************************************ 00:10:46.714 16:58:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:46.714 * Looking for test storage... 
00:10:46.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.714 16:58:01 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.714 16:58:01 -- nvmf/common.sh@7 -- # uname -s 00:10:46.714 16:58:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.714 16:58:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.714 16:58:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.714 16:58:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.714 16:58:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.714 16:58:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.714 16:58:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.714 16:58:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.714 16:58:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.714 16:58:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.714 16:58:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.714 16:58:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.714 16:58:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.714 16:58:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.714 16:58:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.714 16:58:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.714 16:58:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.714 16:58:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.714 16:58:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.714 16:58:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.714 16:58:01 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.714 16:58:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.714 16:58:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.714 16:58:01 -- paths/export.sh@5 -- # export PATH 00:10:46.714 16:58:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.714 16:58:01 -- nvmf/common.sh@47 -- # : 0 00:10:46.714 16:58:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.714 16:58:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.714 16:58:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.714 16:58:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.714 16:58:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.714 16:58:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.714 16:58:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.714 16:58:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.714 16:58:01 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.714 16:58:01 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.714 16:58:01 -- target/host_management.sh@105 -- # nvmftestinit 00:10:46.714 16:58:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:46.714 16:58:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.714 16:58:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:46.714 16:58:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:46.714 16:58:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:46.714 16:58:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.714 16:58:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.714 16:58:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:46.714 16:58:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:46.714 16:58:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:46.714 16:58:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:46.714 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:10:48.089 16:58:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:48.089 16:58:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.089 16:58:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.089 16:58:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.089 16:58:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.089 16:58:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.089 16:58:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.090 16:58:03 -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.090 16:58:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.090 16:58:03 -- nvmf/common.sh@296 -- # e810=() 00:10:48.090 16:58:03 -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.090 16:58:03 -- nvmf/common.sh@297 -- # x722=() 00:10:48.090 16:58:03 -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.090 16:58:03 -- nvmf/common.sh@298 -- # mlx=() 00:10:48.090 16:58:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.090 16:58:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:10:48.090 16:58:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.090 16:58:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.090 16:58:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.090 16:58:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.090 16:58:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.090 16:58:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:48.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:48.090 16:58:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.090 16:58:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:48.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:48.090 16:58:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.090 16:58:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.090 
16:58:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.090 16:58:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.090 16:58:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:48.090 16:58:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.090 16:58:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:48.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:48.090 16:58:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.090 16:58:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.090 16:58:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.090 16:58:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:48.090 16:58:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.090 16:58:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:48.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:48.090 16:58:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.090 16:58:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:48.090 16:58:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:48.090 16:58:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:48.090 16:58:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.090 16:58:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.090 16:58:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.090 16:58:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.090 16:58:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.090 16:58:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.090 16:58:03 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.090 16:58:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.090 16:58:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.090 16:58:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.090 16:58:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.090 16:58:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.090 16:58:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.090 16:58:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.090 16:58:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.090 16:58:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.090 16:58:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.090 16:58:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.090 16:58:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.090 16:58:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:10:48.090 00:10:48.090 --- 10.0.0.2 ping statistics --- 00:10:48.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.090 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:48.090 16:58:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:10:48.090 00:10:48.090 --- 10.0.0.1 ping statistics --- 00:10:48.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.090 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:10:48.090 16:58:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.090 16:58:03 -- nvmf/common.sh@411 -- # return 0 00:10:48.090 16:58:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:48.090 16:58:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.090 16:58:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:48.090 16:58:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.090 16:58:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:48.090 16:58:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:48.090 16:58:03 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:10:48.090 16:58:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:48.090 16:58:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.090 16:58:03 -- common/autotest_common.sh@10 -- # set +x 00:10:48.090 ************************************ 00:10:48.090 START TEST nvmf_host_management 00:10:48.090 ************************************ 00:10:48.090 16:58:03 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:10:48.090 16:58:03 -- target/host_management.sh@69 -- # starttarget 00:10:48.090 16:58:03 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:48.090 16:58:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:48.090 16:58:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:48.090 16:58:03 -- common/autotest_common.sh@10 -- # set +x 00:10:48.090 16:58:03 -- nvmf/common.sh@470 -- # nvmfpid=1647103 00:10:48.090 16:58:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:48.090 16:58:03 -- nvmf/common.sh@471 -- # waitforlisten 1647103 00:10:48.090 16:58:03 -- common/autotest_common.sh@817 -- # '[' -z 1647103 ']' 00:10:48.090 16:58:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.090 16:58:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:48.090 16:58:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.090 16:58:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:48.090 16:58:03 -- common/autotest_common.sh@10 -- # set +x 00:10:48.090 [2024-04-18 16:58:03.692629] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:10:48.090 [2024-04-18 16:58:03.692701] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.090 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.090 [2024-04-18 16:58:03.761663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.350 [2024-04-18 16:58:03.882023] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.350 [2024-04-18 16:58:03.882074] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.350 [2024-04-18 16:58:03.882089] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.350 [2024-04-18 16:58:03.882102] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:48.350 [2024-04-18 16:58:03.882112] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.350 [2024-04-18 16:58:03.882211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.350 [2024-04-18 16:58:03.882261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.350 [2024-04-18 16:58:03.882291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:48.350 [2024-04-18 16:58:03.882293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.919 16:58:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:48.919 16:58:04 -- common/autotest_common.sh@850 -- # return 0 00:10:49.177 16:58:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:49.177 16:58:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:49.177 16:58:04 -- common/autotest_common.sh@10 -- # set +x 00:10:49.177 16:58:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.177 16:58:04 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.177 16:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.177 16:58:04 -- common/autotest_common.sh@10 -- # set +x 00:10:49.177 [2024-04-18 16:58:04.645246] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.177 16:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.177 16:58:04 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:49.177 16:58:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:49.177 16:58:04 -- common/autotest_common.sh@10 -- # set +x 00:10:49.177 16:58:04 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:49.177 16:58:04 -- target/host_management.sh@23 -- # cat 00:10:49.177 16:58:04 -- target/host_management.sh@30 -- # rpc_cmd 00:10:49.177 
16:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.177 16:58:04 -- common/autotest_common.sh@10 -- # set +x 00:10:49.177 Malloc0 00:10:49.178 [2024-04-18 16:58:04.704176] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.178 16:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.178 16:58:04 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:49.178 16:58:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:49.178 16:58:04 -- common/autotest_common.sh@10 -- # set +x 00:10:49.178 16:58:04 -- target/host_management.sh@73 -- # perfpid=1647276 00:10:49.178 16:58:04 -- target/host_management.sh@74 -- # waitforlisten 1647276 /var/tmp/bdevperf.sock 00:10:49.178 16:58:04 -- common/autotest_common.sh@817 -- # '[' -z 1647276 ']' 00:10:49.178 16:58:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:49.178 16:58:04 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:49.178 16:58:04 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:49.178 16:58:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.178 16:58:04 -- nvmf/common.sh@521 -- # config=() 00:10:49.178 16:58:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:49.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:49.178 16:58:04 -- nvmf/common.sh@521 -- # local subsystem config 00:10:49.178 16:58:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.178 16:58:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:49.178 16:58:04 -- common/autotest_common.sh@10 -- # set +x 00:10:49.178 16:58:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:49.178 { 00:10:49.178 "params": { 00:10:49.178 "name": "Nvme$subsystem", 00:10:49.178 "trtype": "$TEST_TRANSPORT", 00:10:49.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.178 "adrfam": "ipv4", 00:10:49.178 "trsvcid": "$NVMF_PORT", 00:10:49.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.178 "hdgst": ${hdgst:-false}, 00:10:49.178 "ddgst": ${ddgst:-false} 00:10:49.178 }, 00:10:49.178 "method": "bdev_nvme_attach_controller" 00:10:49.178 } 00:10:49.178 EOF 00:10:49.178 )") 00:10:49.178 16:58:04 -- nvmf/common.sh@543 -- # cat 00:10:49.178 16:58:04 -- nvmf/common.sh@545 -- # jq . 00:10:49.178 16:58:04 -- nvmf/common.sh@546 -- # IFS=, 00:10:49.178 16:58:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:49.178 "params": { 00:10:49.178 "name": "Nvme0", 00:10:49.178 "trtype": "tcp", 00:10:49.178 "traddr": "10.0.0.2", 00:10:49.178 "adrfam": "ipv4", 00:10:49.178 "trsvcid": "4420", 00:10:49.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:49.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:49.178 "hdgst": false, 00:10:49.178 "ddgst": false 00:10:49.178 }, 00:10:49.178 "method": "bdev_nvme_attach_controller" 00:10:49.178 }' 00:10:49.178 [2024-04-18 16:58:04.774292] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:10:49.178 [2024-04-18 16:58:04.774398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647276 ] 00:10:49.178 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.178 [2024-04-18 16:58:04.835762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.438 [2024-04-18 16:58:04.944160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.697 Running I/O for 10 seconds... 00:10:49.697 16:58:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:49.697 16:58:05 -- common/autotest_common.sh@850 -- # return 0 00:10:49.697 16:58:05 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:49.697 16:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.697 16:58:05 -- common/autotest_common.sh@10 -- # set +x 00:10:49.697 16:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.697 16:58:05 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:49.697 16:58:05 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:49.697 16:58:05 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:49.697 16:58:05 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:49.697 16:58:05 -- target/host_management.sh@52 -- # local ret=1 00:10:49.697 16:58:05 -- target/host_management.sh@53 -- # local i 00:10:49.697 16:58:05 -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:49.697 16:58:05 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:49.697 16:58:05 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:49.697 16:58:05 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:10:49.697 16:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.697 16:58:05 -- common/autotest_common.sh@10 -- # set +x 00:10:49.697 16:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.697 16:58:05 -- target/host_management.sh@55 -- # read_io_count=67 00:10:49.697 16:58:05 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:49.697 16:58:05 -- target/host_management.sh@62 -- # sleep 0.25 00:10:49.956 16:58:05 -- target/host_management.sh@54 -- # (( i-- )) 00:10:49.956 16:58:05 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:49.956 16:58:05 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:49.956 16:58:05 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:49.956 16:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.956 16:58:05 -- common/autotest_common.sh@10 -- # set +x 00:10:50.217 16:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.217 16:58:05 -- target/host_management.sh@55 -- # read_io_count=579 00:10:50.217 16:58:05 -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:10:50.217 16:58:05 -- target/host_management.sh@59 -- # ret=0 00:10:50.217 16:58:05 -- target/host_management.sh@60 -- # break 00:10:50.217 16:58:05 -- target/host_management.sh@64 -- # return 0 00:10:50.217 16:58:05 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:50.217 16:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.217 16:58:05 -- common/autotest_common.sh@10 -- # set +x 00:10:50.217 [2024-04-18 16:58:05.699412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116aec0 is same with the state(5) to be set 00:10:50.217 [2024-04-18 16:58:05.699526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116aec0 is same with the state(5) to be set 00:10:50.217 [2024-04-18 
16:58:05.699542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116aec0 is same with the state(5) to be set 00:10:50.217 [2024-04-18 16:58:05.699555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116aec0 is same with the state(5) to be set 00:10:50.217 [2024-04-18 16:58:05.699567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116aec0 is same with the state(5) to be set 00:10:50.217 [2024-04-18 16:58:05.699580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116aec0 is same with the state(5) to be set 00:10:50.217 16:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.217 16:58:05 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:50.217 16:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.217 16:58:05 -- common/autotest_common.sh@10 -- # set +x 00:10:50.217 [2024-04-18 16:58:05.704746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.217 [2024-04-18 16:58:05.704790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.217 [2024-04-18 16:58:05.704808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.217 [2024-04-18 16:58:05.704822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.217 [2024-04-18 16:58:05.704836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.217 [2024-04-18 16:58:05.704850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.217 [2024-04-18 16:58:05.704864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.217 [2024-04-18 16:58:05.704877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.217 [2024-04-18 16:58:05.704901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75160 is same with the state(5) to be set 00:10:50.218 [2024-04-18 16:58:05.704973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.704994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 
16:58:05.705109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 
16:58:05.705786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.705985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.705999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.706013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.706026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.706041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.706054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.706069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.706081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.706100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.706113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.218 [2024-04-18 16:58:05.706128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.218 [2024-04-18 16:58:05.706141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 
[2024-04-18 16:58:05.706464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:50.219 [2024-04-18 16:58:05.706895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.219 [2024-04-18 16:58:05.706983] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23a5db0 was disconnected and freed. reset controller. 
00:10:50.219 [2024-04-18 16:58:05.708095] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:10:50.219 task offset: 81920 on job bdev=Nvme0n1 fails
00:10:50.219
00:10:50.219 Latency(us)
00:10:50.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:50.219 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:50.219 Job: Nvme0n1 ended in about 0.42 seconds with error
00:10:50.219 Verification LBA range: start 0x0 length 0x400
00:10:50.219 Nvme0n1 : 0.42 1537.32 96.08 153.73 0.00 36790.68 2803.48 34175.81
00:10:50.219 ===================================================================================================================
00:10:50.219 Total : 1537.32 96.08 153.73 0.00 36790.68 2803.48 34175.81
00:10:50.219 [2024-04-18 16:58:05.709969] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:50.219 [2024-04-18 16:58:05.709996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75160 (9): Bad file descriptor
00:10:50.219 16:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:50.219 16:58:05 -- target/host_management.sh@87 -- # sleep 1
00:10:50.219 [2024-04-18 16:58:05.801748] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
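A quick consistency check on the latency table above: bdevperf ran with a 64 KiB I/O size (-o 65536), so MiB/s is simply IOPS divided by 16 (1 MiB = 16 × 64 KiB), which is how 1537.32 IOPS corresponds to 96.08 MiB/s. A small awk sketch reproducing that arithmetic:

```shell
# Recompute the MiB/s column from the IOPS column of the table above:
# each I/O is 65536 bytes (bdevperf -o 65536), and 1 MiB = 1048576 bytes.
iops=1537.32
mibps=$(awk -v iops="$iops" 'BEGIN { printf "%.2f", iops * 65536 / 1048576 }')
echo "$mibps"   # prints 96.08
```

The same relation holds for the successful 1-second run later in the log (1513.19 IOPS at 94.57 MiB/s).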
00:10:51.156 16:58:06 -- target/host_management.sh@91 -- # kill -9 1647276
00:10:51.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1647276) - No such process
00:10:51.156 16:58:06 -- target/host_management.sh@91 -- # true
00:10:51.156 16:58:06 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:10:51.156 16:58:06 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:10:51.156 16:58:06 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:10:51.156 16:58:06 -- nvmf/common.sh@521 -- # config=()
00:10:51.156 16:58:06 -- nvmf/common.sh@521 -- # local subsystem config
00:10:51.156 16:58:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:10:51.156 16:58:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:10:51.156 {
00:10:51.156 "params": {
00:10:51.156 "name": "Nvme$subsystem",
00:10:51.156 "trtype": "$TEST_TRANSPORT",
00:10:51.156 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:51.156 "adrfam": "ipv4",
00:10:51.156 "trsvcid": "$NVMF_PORT",
00:10:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:51.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:51.156 "hdgst": ${hdgst:-false},
00:10:51.156 "ddgst": ${ddgst:-false}
00:10:51.156 },
00:10:51.156 "method": "bdev_nvme_attach_controller"
00:10:51.156 }
00:10:51.156 EOF
00:10:51.156 )")
00:10:51.156 16:58:06 -- nvmf/common.sh@543 -- # cat
00:10:51.156 16:58:06 -- nvmf/common.sh@545 -- # jq .
00:10:51.156 16:58:06 -- nvmf/common.sh@546 -- # IFS=, 00:10:51.156 16:58:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:51.156 "params": { 00:10:51.156 "name": "Nvme0", 00:10:51.156 "trtype": "tcp", 00:10:51.156 "traddr": "10.0.0.2", 00:10:51.156 "adrfam": "ipv4", 00:10:51.156 "trsvcid": "4420", 00:10:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:51.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:51.156 "hdgst": false, 00:10:51.156 "ddgst": false 00:10:51.156 }, 00:10:51.156 "method": "bdev_nvme_attach_controller" 00:10:51.156 }' 00:10:51.156 [2024-04-18 16:58:06.757905] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:10:51.156 [2024-04-18 16:58:06.758001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647551 ] 00:10:51.156 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.156 [2024-04-18 16:58:06.818291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.417 [2024-04-18 16:58:06.927791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.675 Running I/O for 1 seconds... 
00:10:52.610 00:10:52.610 Latency(us) 00:10:52.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.610 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:52.610 Verification LBA range: start 0x0 length 0x400 00:10:52.610 Nvme0n1 : 1.02 1513.19 94.57 0.00 0.00 41638.78 8738.13 35340.89 00:10:52.610 =================================================================================================================== 00:10:52.610 Total : 1513.19 94.57 0.00 0.00 41638.78 8738.13 35340.89 00:10:52.881 16:58:08 -- target/host_management.sh@102 -- # stoptarget 00:10:52.881 16:58:08 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:52.881 16:58:08 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:52.881 16:58:08 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:52.881 16:58:08 -- target/host_management.sh@40 -- # nvmftestfini 00:10:52.881 16:58:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:52.881 16:58:08 -- nvmf/common.sh@117 -- # sync 00:10:52.881 16:58:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.881 16:58:08 -- nvmf/common.sh@120 -- # set +e 00:10:52.881 16:58:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.881 16:58:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.881 rmmod nvme_tcp 00:10:52.881 rmmod nvme_fabrics 00:10:52.881 rmmod nvme_keyring 00:10:52.881 16:58:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.881 16:58:08 -- nvmf/common.sh@124 -- # set -e 00:10:52.881 16:58:08 -- nvmf/common.sh@125 -- # return 0 00:10:52.881 16:58:08 -- nvmf/common.sh@478 -- # '[' -n 1647103 ']' 00:10:52.881 16:58:08 -- nvmf/common.sh@479 -- # killprocess 1647103 00:10:52.881 16:58:08 -- common/autotest_common.sh@936 -- # '[' -z 1647103 ']' 00:10:52.881 16:58:08 -- 
common/autotest_common.sh@940 -- # kill -0 1647103 00:10:52.881 16:58:08 -- common/autotest_common.sh@941 -- # uname 00:10:52.881 16:58:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:52.881 16:58:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1647103 00:10:52.881 16:58:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:52.881 16:58:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:52.881 16:58:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1647103' 00:10:52.881 killing process with pid 1647103 00:10:52.881 16:58:08 -- common/autotest_common.sh@955 -- # kill 1647103 00:10:52.881 16:58:08 -- common/autotest_common.sh@960 -- # wait 1647103 00:10:53.180 [2024-04-18 16:58:08.809883] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:53.180 16:58:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:53.180 16:58:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:53.180 16:58:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:53.180 16:58:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.180 16:58:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.180 16:58:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.180 16:58:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.180 16:58:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.716 16:58:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.716 00:10:55.716 real 0m7.243s 00:10:55.716 user 0m22.121s 00:10:55.716 sys 0m1.291s 00:10:55.716 16:58:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.716 16:58:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 ************************************ 00:10:55.716 END TEST nvmf_host_management 00:10:55.716 ************************************ 00:10:55.716 16:58:10 -- 
target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:55.716 00:10:55.716 real 0m9.590s 00:10:55.716 user 0m22.982s 00:10:55.716 sys 0m2.799s 00:10:55.716 16:58:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.716 16:58:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 ************************************ 00:10:55.716 END TEST nvmf_host_management 00:10:55.716 ************************************ 00:10:55.716 16:58:10 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:55.716 16:58:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:55.716 16:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.716 16:58:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 ************************************ 00:10:55.716 START TEST nvmf_lvol 00:10:55.716 ************************************ 00:10:55.716 16:58:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:55.716 * Looking for test storage... 
00:10:55.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.716 16:58:11 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.716 16:58:11 -- nvmf/common.sh@7 -- # uname -s 00:10:55.716 16:58:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.716 16:58:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.716 16:58:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.716 16:58:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.716 16:58:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.716 16:58:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.716 16:58:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.716 16:58:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.716 16:58:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.716 16:58:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.716 16:58:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.716 16:58:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.716 16:58:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.716 16:58:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.716 16:58:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.716 16:58:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.716 16:58:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.716 16:58:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.716 16:58:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.716 16:58:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.717 16:58:11 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.717 16:58:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.717 16:58:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.717 16:58:11 -- paths/export.sh@5 -- # export PATH 00:10:55.717 16:58:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.717 16:58:11 -- nvmf/common.sh@47 -- # : 0 00:10:55.717 16:58:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.717 16:58:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.717 16:58:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.717 16:58:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.717 16:58:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.717 16:58:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.717 16:58:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.717 16:58:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.717 16:58:11 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.717 16:58:11 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.717 16:58:11 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:55.717 16:58:11 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:55.717 16:58:11 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.717 16:58:11 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:55.717 16:58:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:55.717 16:58:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.717 16:58:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:55.717 16:58:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:55.717 16:58:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 
00:10:55.717 16:58:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.717 16:58:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.717 16:58:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.717 16:58:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:55.717 16:58:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:55.717 16:58:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.717 16:58:11 -- common/autotest_common.sh@10 -- # set +x 00:10:57.621 16:58:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:57.621 16:58:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:57.621 16:58:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:57.621 16:58:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:57.621 16:58:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:57.621 16:58:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:57.621 16:58:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:57.621 16:58:13 -- nvmf/common.sh@295 -- # net_devs=() 00:10:57.621 16:58:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:57.621 16:58:13 -- nvmf/common.sh@296 -- # e810=() 00:10:57.621 16:58:13 -- nvmf/common.sh@296 -- # local -ga e810 00:10:57.621 16:58:13 -- nvmf/common.sh@297 -- # x722=() 00:10:57.621 16:58:13 -- nvmf/common.sh@297 -- # local -ga x722 00:10:57.621 16:58:13 -- nvmf/common.sh@298 -- # mlx=() 00:10:57.621 16:58:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:57.621 16:58:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.621 16:58:13 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.621 16:58:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:57.621 16:58:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:57.621 16:58:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:57.621 16:58:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.621 16:58:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.621 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.621 16:58:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.621 16:58:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.621 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.621 16:58:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.621 16:58:13 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:57.621 16:58:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.621 16:58:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.621 16:58:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:57.621 16:58:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.621 16:58:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.621 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.621 16:58:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.621 16:58:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.621 16:58:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.621 16:58:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:57.621 16:58:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.621 16:58:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.621 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.621 16:58:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.621 16:58:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:57.621 16:58:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:57.621 16:58:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:57.621 16:58:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.621 16:58:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.621 16:58:13 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.621 16:58:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:57.621 16:58:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.621 16:58:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.621 16:58:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:57.621 16:58:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.621 16:58:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.621 16:58:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:57.621 16:58:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:57.621 16:58:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.621 16:58:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.621 16:58:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.621 16:58:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.621 16:58:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:57.621 16:58:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.621 16:58:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.621 16:58:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.621 16:58:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:57.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:10:57.621 00:10:57.621 --- 10.0.0.2 ping statistics --- 00:10:57.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.621 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:10:57.621 16:58:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:10:57.621 00:10:57.621 --- 10.0.0.1 ping statistics --- 00:10:57.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.621 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:57.621 16:58:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.621 16:58:13 -- nvmf/common.sh@411 -- # return 0 00:10:57.621 16:58:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:57.621 16:58:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.621 16:58:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:57.621 16:58:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.621 16:58:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:57.621 16:58:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:57.621 16:58:13 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:57.621 16:58:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:57.621 16:58:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:57.621 16:58:13 -- common/autotest_common.sh@10 -- # set +x 00:10:57.621 16:58:13 -- nvmf/common.sh@470 -- # nvmfpid=1649776 00:10:57.621 16:58:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:57.621 16:58:13 -- nvmf/common.sh@471 -- # waitforlisten 1649776 00:10:57.881 16:58:13 -- common/autotest_common.sh@817 -- # '[' -z 1649776 ']' 00:10:57.881 16:58:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.881 16:58:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:57.881 16:58:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:57.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.881 16:58:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:57.881 16:58:13 -- common/autotest_common.sh@10 -- # set +x 00:10:57.881 [2024-04-18 16:58:13.372044] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:10:57.881 [2024-04-18 16:58:13.372138] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.881 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.881 [2024-04-18 16:58:13.437013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:57.881 [2024-04-18 16:58:13.545424] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.881 [2024-04-18 16:58:13.545487] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.881 [2024-04-18 16:58:13.545517] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.881 [2024-04-18 16:58:13.545529] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.881 [2024-04-18 16:58:13.545539] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:57.881 [2024-04-18 16:58:13.545614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.881 [2024-04-18 16:58:13.545645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.881 [2024-04-18 16:58:13.545648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.138 16:58:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:58.138 16:58:13 -- common/autotest_common.sh@850 -- # return 0 00:10:58.138 16:58:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:58.139 16:58:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:58.139 16:58:13 -- common/autotest_common.sh@10 -- # set +x 00:10:58.139 16:58:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.139 16:58:13 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.395 [2024-04-18 16:58:13.899283] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.396 16:58:13 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.653 16:58:14 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:58.653 16:58:14 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.911 16:58:14 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:58.911 16:58:14 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:59.169 16:58:14 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:59.428 16:58:15 -- target/nvmf_lvol.sh@29 -- # lvs=3f752fdc-20d9-4542-97ee-37858184d97c 00:10:59.428 16:58:15 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f752fdc-20d9-4542-97ee-37858184d97c lvol 20 00:10:59.685 16:58:15 -- target/nvmf_lvol.sh@32 -- # lvol=230f602b-0d56-4840-ba99-c266a86e9929 00:10:59.685 16:58:15 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:59.942 16:58:15 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 230f602b-0d56-4840-ba99-c266a86e9929 00:11:00.201 16:58:15 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:00.458 [2024-04-18 16:58:16.057287] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.458 16:58:16 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:00.716 16:58:16 -- target/nvmf_lvol.sh@42 -- # perf_pid=1650201 00:11:00.716 16:58:16 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:00.716 16:58:16 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:00.716 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.650 16:58:17 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 230f602b-0d56-4840-ba99-c266a86e9929 MY_SNAPSHOT 00:11:02.216 16:58:17 -- target/nvmf_lvol.sh@47 -- # snapshot=ff43096d-067f-40e8-97a8-a0b6b00f0041 00:11:02.216 16:58:17 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
230f602b-0d56-4840-ba99-c266a86e9929 30 00:11:02.474 16:58:17 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ff43096d-067f-40e8-97a8-a0b6b00f0041 MY_CLONE 00:11:02.733 16:58:18 -- target/nvmf_lvol.sh@49 -- # clone=4f0c1257-bae7-4b84-9d85-e31469347914 00:11:02.733 16:58:18 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4f0c1257-bae7-4b84-9d85-e31469347914 00:11:03.299 16:58:18 -- target/nvmf_lvol.sh@53 -- # wait 1650201 00:11:11.404 Initializing NVMe Controllers 00:11:11.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:11.404 Controller IO queue size 128, less than required. 00:11:11.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:11.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:11.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:11.404 Initialization complete. Launching workers. 
00:11:11.404 ======================================================== 00:11:11.404 Latency(us) 00:11:11.404 Device Information : IOPS MiB/s Average min max 00:11:11.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9801.66 38.29 13063.49 567.82 62413.50 00:11:11.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10603.56 41.42 12074.72 2390.69 59243.72 00:11:11.404 ======================================================== 00:11:11.404 Total : 20405.22 79.71 12549.68 567.82 62413.50 00:11:11.404 00:11:11.404 16:58:26 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:11.404 16:58:26 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 230f602b-0d56-4840-ba99-c266a86e9929 00:11:11.663 16:58:27 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f752fdc-20d9-4542-97ee-37858184d97c 00:11:11.921 16:58:27 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:11.921 16:58:27 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:11.921 16:58:27 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:11.921 16:58:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:11.921 16:58:27 -- nvmf/common.sh@117 -- # sync 00:11:11.921 16:58:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.921 16:58:27 -- nvmf/common.sh@120 -- # set +e 00:11:11.921 16:58:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.921 16:58:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.921 rmmod nvme_tcp 00:11:11.921 rmmod nvme_fabrics 00:11:11.921 rmmod nvme_keyring 00:11:11.921 16:58:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:11.921 16:58:27 -- nvmf/common.sh@124 -- # set -e 00:11:11.921 16:58:27 -- nvmf/common.sh@125 -- # return 0 00:11:11.921 16:58:27 -- nvmf/common.sh@478 -- # '[' -n 
1649776 ']' 00:11:11.921 16:58:27 -- nvmf/common.sh@479 -- # killprocess 1649776 00:11:11.921 16:58:27 -- common/autotest_common.sh@936 -- # '[' -z 1649776 ']' 00:11:11.921 16:58:27 -- common/autotest_common.sh@940 -- # kill -0 1649776 00:11:11.921 16:58:27 -- common/autotest_common.sh@941 -- # uname 00:11:11.921 16:58:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:11.921 16:58:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1649776 00:11:11.921 16:58:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:11.921 16:58:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:11.921 16:58:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1649776' 00:11:11.921 killing process with pid 1649776 00:11:11.921 16:58:27 -- common/autotest_common.sh@955 -- # kill 1649776 00:11:11.921 16:58:27 -- common/autotest_common.sh@960 -- # wait 1649776 00:11:12.180 16:58:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:12.180 16:58:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:12.180 16:58:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:12.180 16:58:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.180 16:58:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.180 16:58:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.180 16:58:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.180 16:58:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.715 16:58:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:14.715 00:11:14.715 real 0m18.861s 00:11:14.715 user 1m2.782s 00:11:14.715 sys 0m6.186s 00:11:14.715 16:58:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:14.715 16:58:29 -- common/autotest_common.sh@10 -- # set +x 00:11:14.715 ************************************ 00:11:14.715 END TEST nvmf_lvol 00:11:14.715 ************************************ 
00:11:14.715 16:58:29 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:14.715 16:58:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:14.715 16:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.715 16:58:29 -- common/autotest_common.sh@10 -- # set +x 00:11:14.715 ************************************ 00:11:14.715 START TEST nvmf_lvs_grow 00:11:14.715 ************************************ 00:11:14.715 16:58:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:14.715 * Looking for test storage... 00:11:14.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.715 16:58:30 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.715 16:58:30 -- nvmf/common.sh@7 -- # uname -s 00:11:14.715 16:58:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.715 16:58:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.715 16:58:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.715 16:58:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.715 16:58:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.715 16:58:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.715 16:58:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.715 16:58:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.715 16:58:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.715 16:58:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.715 16:58:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.715 16:58:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.715 16:58:30 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.715 16:58:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.715 16:58:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.715 16:58:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.715 16:58:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.715 16:58:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.715 16:58:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.715 16:58:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.715 16:58:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.715 16:58:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.715 16:58:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.715 16:58:30 -- paths/export.sh@5 -- # export PATH 00:11:14.715 16:58:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.715 16:58:30 -- nvmf/common.sh@47 -- # : 0 00:11:14.715 16:58:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.715 16:58:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.715 16:58:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.715 16:58:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.715 16:58:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.715 16:58:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.715 16:58:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.715 16:58:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.715 16:58:30 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:14.715 16:58:30 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:14.715 16:58:30 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:11:14.715 16:58:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:14.715 16:58:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.715 16:58:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:14.715 16:58:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:14.715 16:58:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:14.715 16:58:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.715 16:58:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.715 16:58:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.715 16:58:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:14.715 16:58:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:14.715 16:58:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:14.715 16:58:30 -- common/autotest_common.sh@10 -- # set +x 00:11:16.625 16:58:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:16.625 16:58:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:16.625 16:58:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:16.625 16:58:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:16.625 16:58:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:16.625 16:58:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:16.625 16:58:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:16.625 16:58:32 -- nvmf/common.sh@295 -- # net_devs=() 00:11:16.625 16:58:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:16.625 16:58:32 -- nvmf/common.sh@296 -- # e810=() 00:11:16.625 16:58:32 -- nvmf/common.sh@296 -- # local -ga e810 00:11:16.625 16:58:32 -- nvmf/common.sh@297 -- # x722=() 00:11:16.625 16:58:32 -- nvmf/common.sh@297 -- # local -ga x722 00:11:16.625 16:58:32 -- nvmf/common.sh@298 -- # mlx=() 00:11:16.625 16:58:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:16.625 16:58:32 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.625 16:58:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:16.625 16:58:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:16.625 16:58:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:16.625 16:58:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.625 16:58:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:16.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:16.625 16:58:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.625 
16:58:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.625 16:58:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:16.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:16.625 16:58:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:16.625 16:58:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:16.625 16:58:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.625 16:58:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.625 16:58:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:16.625 16:58:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.625 16:58:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:16.625 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:16.625 16:58:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.625 16:58:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.625 16:58:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.625 16:58:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:16.626 16:58:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.626 16:58:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:16.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:16.626 16:58:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.626 16:58:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:16.626 16:58:32 -- 
nvmf/common.sh@403 -- # is_hw=yes 00:11:16.626 16:58:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:16.626 16:58:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:16.626 16:58:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:16.626 16:58:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.626 16:58:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.626 16:58:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.626 16:58:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:16.626 16:58:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.626 16:58:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.626 16:58:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:16.626 16:58:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.626 16:58:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.626 16:58:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:16.626 16:58:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:16.626 16:58:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.626 16:58:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.626 16:58:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.626 16:58:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.626 16:58:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:16.626 16:58:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.626 16:58:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.626 16:58:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.626 16:58:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:16.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:16.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:11:16.626 00:11:16.626 --- 10.0.0.2 ping statistics --- 00:11:16.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.626 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:16.626 16:58:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:11:16.626 00:11:16.626 --- 10.0.0.1 ping statistics --- 00:11:16.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.626 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:16.626 16:58:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.626 16:58:32 -- nvmf/common.sh@411 -- # return 0 00:11:16.626 16:58:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:16.626 16:58:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.626 16:58:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:16.626 16:58:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:16.626 16:58:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.626 16:58:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:16.626 16:58:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:16.626 16:58:32 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:11:16.626 16:58:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:16.626 16:58:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:16.626 16:58:32 -- common/autotest_common.sh@10 -- # set +x 00:11:16.626 16:58:32 -- nvmf/common.sh@470 -- # nvmfpid=1653470 00:11:16.626 16:58:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:16.626 16:58:32 -- nvmf/common.sh@471 -- # waitforlisten 1653470 00:11:16.626 16:58:32 -- 
common/autotest_common.sh@817 -- # '[' -z 1653470 ']' 00:11:16.626 16:58:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.626 16:58:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:16.626 16:58:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.626 16:58:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:16.626 16:58:32 -- common/autotest_common.sh@10 -- # set +x 00:11:16.626 [2024-04-18 16:58:32.271334] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:11:16.626 [2024-04-18 16:58:32.271424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.626 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.930 [2024-04-18 16:58:32.335154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.930 [2024-04-18 16:58:32.440421] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.930 [2024-04-18 16:58:32.440494] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.930 [2024-04-18 16:58:32.440523] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.930 [2024-04-18 16:58:32.440534] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.930 [2024-04-18 16:58:32.440545] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:16.930 [2024-04-18 16:58:32.440574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.930 16:58:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:16.930 16:58:32 -- common/autotest_common.sh@850 -- # return 0 00:11:16.930 16:58:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:16.930 16:58:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:16.930 16:58:32 -- common/autotest_common.sh@10 -- # set +x 00:11:16.930 16:58:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.930 16:58:32 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:17.187 [2024-04-18 16:58:32.855428] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.187 16:58:32 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:11:17.187 16:58:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.187 16:58:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.187 16:58:32 -- common/autotest_common.sh@10 -- # set +x 00:11:17.444 ************************************ 00:11:17.444 START TEST lvs_grow_clean 00:11:17.444 ************************************ 00:11:17.444 16:58:32 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:17.444 16:58:32 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:17.702 16:58:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:17.702 16:58:33 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:17.960 16:58:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:17.960 16:58:33 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:17.960 16:58:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:18.218 16:58:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:18.218 16:58:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:18.218 16:58:33 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c lvol 150 00:11:18.476 16:58:34 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f968667b-cf30-4f39-8ec2-730850630840 00:11:18.476 16:58:34 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:18.476 16:58:34 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:18.734 [2024-04-18 16:58:34.239484] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:18.734 [2024-04-18 16:58:34.239585] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:18.734 true 00:11:18.734 16:58:34 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:18.734 16:58:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:18.992 16:58:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:18.992 16:58:34 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:19.250 16:58:34 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f968667b-cf30-4f39-8ec2-730850630840 00:11:19.509 16:58:34 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:19.509 [2024-04-18 16:58:35.210550] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.766 16:58:35 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:20.025 16:58:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1653918 00:11:20.025 16:58:35 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:20.025 16:58:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:20.025 16:58:35 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1653918 /var/tmp/bdevperf.sock 00:11:20.025 16:58:35 -- common/autotest_common.sh@817 -- # '[' -z 1653918 ']' 00:11:20.025 16:58:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:20.025 16:58:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:20.025 16:58:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:20.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:20.025 16:58:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:20.025 16:58:35 -- common/autotest_common.sh@10 -- # set +x 00:11:20.025 [2024-04-18 16:58:35.545042] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:11:20.025 [2024-04-18 16:58:35.545127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653918 ] 00:11:20.025 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.025 [2024-04-18 16:58:35.605688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.025 [2024-04-18 16:58:35.718569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.284 16:58:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:20.284 16:58:35 -- common/autotest_common.sh@850 -- # return 0 00:11:20.284 16:58:35 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:20.850 Nvme0n1 00:11:20.850 16:58:36 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:11:21.109 [ 00:11:21.109 { 00:11:21.109 "name": "Nvme0n1", 00:11:21.109 "aliases": [ 00:11:21.109 "f968667b-cf30-4f39-8ec2-730850630840" 00:11:21.109 ], 00:11:21.109 "product_name": "NVMe disk", 00:11:21.109 "block_size": 4096, 00:11:21.109 "num_blocks": 38912, 00:11:21.109 "uuid": "f968667b-cf30-4f39-8ec2-730850630840", 00:11:21.109 "assigned_rate_limits": { 00:11:21.109 "rw_ios_per_sec": 0, 00:11:21.109 "rw_mbytes_per_sec": 0, 00:11:21.109 "r_mbytes_per_sec": 0, 00:11:21.109 "w_mbytes_per_sec": 0 00:11:21.109 }, 00:11:21.109 "claimed": false, 00:11:21.109 "zoned": false, 00:11:21.109 "supported_io_types": { 00:11:21.109 "read": true, 00:11:21.109 "write": true, 00:11:21.109 "unmap": true, 00:11:21.109 "write_zeroes": true, 00:11:21.109 "flush": true, 00:11:21.109 "reset": true, 00:11:21.109 "compare": true, 00:11:21.109 "compare_and_write": true, 00:11:21.109 "abort": true, 00:11:21.109 "nvme_admin": true, 00:11:21.109 "nvme_io": true 00:11:21.109 }, 00:11:21.109 "memory_domains": [ 00:11:21.109 { 00:11:21.109 "dma_device_id": "system", 00:11:21.109 "dma_device_type": 1 00:11:21.109 } 00:11:21.109 ], 00:11:21.109 "driver_specific": { 00:11:21.109 "nvme": [ 00:11:21.109 { 00:11:21.109 "trid": { 00:11:21.109 "trtype": "TCP", 00:11:21.109 "adrfam": "IPv4", 00:11:21.109 "traddr": "10.0.0.2", 00:11:21.109 "trsvcid": "4420", 00:11:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:21.109 }, 00:11:21.109 "ctrlr_data": { 00:11:21.109 "cntlid": 1, 00:11:21.110 "vendor_id": "0x8086", 00:11:21.110 "model_number": "SPDK bdev Controller", 00:11:21.110 "serial_number": "SPDK0", 00:11:21.110 "firmware_revision": "24.05", 00:11:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:21.110 "oacs": { 00:11:21.110 "security": 0, 00:11:21.110 "format": 0, 00:11:21.110 "firmware": 0, 00:11:21.110 "ns_manage": 0 00:11:21.110 }, 00:11:21.110 "multi_ctrlr": true, 00:11:21.110 "ana_reporting": false 00:11:21.110 }, 00:11:21.110 "vs": { 00:11:21.110 "nvme_version": "1.3" 
00:11:21.110 }, 00:11:21.110 "ns_data": { 00:11:21.110 "id": 1, 00:11:21.110 "can_share": true 00:11:21.110 } 00:11:21.110 } 00:11:21.110 ], 00:11:21.110 "mp_policy": "active_passive" 00:11:21.110 } 00:11:21.110 } 00:11:21.110 ] 00:11:21.110 16:58:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1654043 00:11:21.110 16:58:36 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:21.110 16:58:36 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:21.110 Running I/O for 10 seconds... 00:11:22.045 Latency(us) 00:11:22.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.045 Nvme0n1 : 1.00 14162.00 55.32 0.00 0.00 0.00 0.00 0.00 00:11:22.045 =================================================================================================================== 00:11:22.045 Total : 14162.00 55.32 0.00 0.00 0.00 0.00 0.00 00:11:22.045 00:11:22.980 16:58:38 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:23.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.239 Nvme0n1 : 2.00 14542.00 56.80 0.00 0.00 0.00 0.00 0.00 00:11:23.239 =================================================================================================================== 00:11:23.239 Total : 14542.00 56.80 0.00 0.00 0.00 0.00 0.00 00:11:23.239 00:11:23.239 true 00:11:23.239 16:58:38 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:23.239 16:58:38 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:23.498 16:58:39 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:23.498 
16:58:39 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:23.498 16:58:39 -- target/nvmf_lvs_grow.sh@65 -- # wait 1654043 00:11:24.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.065 Nvme0n1 : 3.00 14691.00 57.39 0.00 0.00 0.00 0.00 0.00 00:11:24.065 =================================================================================================================== 00:11:24.065 Total : 14691.00 57.39 0.00 0.00 0.00 0.00 0.00 00:11:24.065 00:11:25.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.439 Nvme0n1 : 4.00 14701.25 57.43 0.00 0.00 0.00 0.00 0.00 00:11:25.439 =================================================================================================================== 00:11:25.439 Total : 14701.25 57.43 0.00 0.00 0.00 0.00 0.00 00:11:25.439 00:11:26.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.373 Nvme0n1 : 5.00 14707.40 57.45 0.00 0.00 0.00 0.00 0.00 00:11:26.373 =================================================================================================================== 00:11:26.373 Total : 14707.40 57.45 0.00 0.00 0.00 0.00 0.00 00:11:26.373 00:11:27.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:27.308 Nvme0n1 : 6.00 14711.50 57.47 0.00 0.00 0.00 0.00 0.00 00:11:27.308 =================================================================================================================== 00:11:27.308 Total : 14711.50 57.47 0.00 0.00 0.00 0.00 0.00 00:11:27.308 00:11:28.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.242 Nvme0n1 : 7.00 14750.71 57.62 0.00 0.00 0.00 0.00 0.00 00:11:28.242 =================================================================================================================== 00:11:28.242 Total : 14750.71 57.62 0.00 0.00 0.00 0.00 0.00 00:11:28.242 00:11:29.175 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:11:29.175 Nvme0n1 : 8.00 14811.88 57.86 0.00 0.00 0.00 0.00 0.00 00:11:29.175 =================================================================================================================== 00:11:29.175 Total : 14811.88 57.86 0.00 0.00 0.00 0.00 0.00 00:11:29.175 00:11:30.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.109 Nvme0n1 : 9.00 14866.67 58.07 0.00 0.00 0.00 0.00 0.00 00:11:30.109 =================================================================================================================== 00:11:30.109 Total : 14866.67 58.07 0.00 0.00 0.00 0.00 0.00 00:11:30.109 00:11:31.043 00:11:31.043 Latency(us) 00:11:31.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.043 Nvme0n1 : 10.00 14888.60 58.16 0.00 0.00 8592.97 3980.71 17087.91 00:11:31.043 =================================================================================================================== 00:11:31.043 Total : 14888.60 58.16 0.00 0.00 8592.97 3980.71 17087.91 00:11:31.043 0 00:11:31.043 16:58:46 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1653918 00:11:31.043 16:58:46 -- common/autotest_common.sh@936 -- # '[' -z 1653918 ']' 00:11:31.043 16:58:46 -- common/autotest_common.sh@940 -- # kill -0 1653918 00:11:31.043 16:58:46 -- common/autotest_common.sh@941 -- # uname 00:11:31.043 16:58:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:31.043 16:58:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653918 00:11:31.302 16:58:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:31.302 16:58:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:31.302 16:58:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653918' 00:11:31.302 killing process with pid 1653918 00:11:31.302 16:58:46 -- 
common/autotest_common.sh@955 -- # kill 1653918 00:11:31.302 Received shutdown signal, test time was about 10.000000 seconds 00:11:31.302 00:11:31.302 Latency(us) 00:11:31.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.302 =================================================================================================================== 00:11:31.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:31.302 16:58:46 -- common/autotest_common.sh@960 -- # wait 1653918 00:11:31.560 16:58:47 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:31.818 16:58:47 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:31.818 16:58:47 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:31.818 16:58:47 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:31.818 16:58:47 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:11:31.818 16:58:47 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:32.076 [2024-04-18 16:58:47.751168] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:32.076 16:58:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:32.076 16:58:47 -- common/autotest_common.sh@638 -- # local es=0 00:11:32.076 16:58:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:32.076 16:58:47 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.334 
16:58:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:32.334 16:58:47 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.334 16:58:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:32.334 16:58:47 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.334 16:58:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:32.334 16:58:47 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.334 16:58:47 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:32.334 16:58:47 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:32.334 request: 00:11:32.335 { 00:11:32.335 "uuid": "c5ace7b1-5791-48c3-b3b7-0ca51b858a7c", 00:11:32.335 "method": "bdev_lvol_get_lvstores", 00:11:32.335 "req_id": 1 00:11:32.335 } 00:11:32.335 Got JSON-RPC error response 00:11:32.335 response: 00:11:32.335 { 00:11:32.335 "code": -19, 00:11:32.335 "message": "No such device" 00:11:32.335 } 00:11:32.335 16:58:48 -- common/autotest_common.sh@641 -- # es=1 00:11:32.335 16:58:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:32.335 16:58:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:32.335 16:58:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:32.335 16:58:48 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:32.593 aio_bdev 00:11:32.593 16:58:48 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev f968667b-cf30-4f39-8ec2-730850630840 00:11:32.593 16:58:48 -- 
common/autotest_common.sh@885 -- # local bdev_name=f968667b-cf30-4f39-8ec2-730850630840 00:11:32.593 16:58:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:32.593 16:58:48 -- common/autotest_common.sh@887 -- # local i 00:11:32.593 16:58:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:32.593 16:58:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:32.593 16:58:48 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:32.851 16:58:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f968667b-cf30-4f39-8ec2-730850630840 -t 2000 00:11:33.110 [ 00:11:33.110 { 00:11:33.110 "name": "f968667b-cf30-4f39-8ec2-730850630840", 00:11:33.110 "aliases": [ 00:11:33.110 "lvs/lvol" 00:11:33.110 ], 00:11:33.110 "product_name": "Logical Volume", 00:11:33.110 "block_size": 4096, 00:11:33.110 "num_blocks": 38912, 00:11:33.110 "uuid": "f968667b-cf30-4f39-8ec2-730850630840", 00:11:33.110 "assigned_rate_limits": { 00:11:33.110 "rw_ios_per_sec": 0, 00:11:33.110 "rw_mbytes_per_sec": 0, 00:11:33.110 "r_mbytes_per_sec": 0, 00:11:33.110 "w_mbytes_per_sec": 0 00:11:33.110 }, 00:11:33.110 "claimed": false, 00:11:33.110 "zoned": false, 00:11:33.110 "supported_io_types": { 00:11:33.110 "read": true, 00:11:33.110 "write": true, 00:11:33.110 "unmap": true, 00:11:33.110 "write_zeroes": true, 00:11:33.110 "flush": false, 00:11:33.110 "reset": true, 00:11:33.110 "compare": false, 00:11:33.110 "compare_and_write": false, 00:11:33.110 "abort": false, 00:11:33.110 "nvme_admin": false, 00:11:33.110 "nvme_io": false 00:11:33.110 }, 00:11:33.110 "driver_specific": { 00:11:33.110 "lvol": { 00:11:33.110 "lvol_store_uuid": "c5ace7b1-5791-48c3-b3b7-0ca51b858a7c", 00:11:33.110 "base_bdev": "aio_bdev", 00:11:33.110 "thin_provision": false, 00:11:33.110 "snapshot": false, 00:11:33.110 "clone": false, 00:11:33.110 "esnap_clone": 
false 00:11:33.110 } 00:11:33.110 } 00:11:33.110 } 00:11:33.110 ] 00:11:33.110 16:58:48 -- common/autotest_common.sh@893 -- # return 0 00:11:33.110 16:58:48 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:33.110 16:58:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:33.403 16:58:48 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:33.403 16:58:48 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:33.403 16:58:48 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:33.663 16:58:49 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:33.663 16:58:49 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f968667b-cf30-4f39-8ec2-730850630840 00:11:33.921 16:58:49 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5ace7b1-5791-48c3-b3b7-0ca51b858a7c 00:11:34.180 16:58:49 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:34.439 16:58:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:34.439 00:11:34.439 real 0m17.117s 00:11:34.439 user 0m16.629s 00:11:34.439 sys 0m1.826s 00:11:34.439 16:58:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:34.439 16:58:50 -- common/autotest_common.sh@10 -- # set +x 00:11:34.439 ************************************ 00:11:34.439 END TEST lvs_grow_clean 00:11:34.439 ************************************ 00:11:34.439 16:58:50 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:34.439 16:58:50 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:34.439 16:58:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.439 16:58:50 -- common/autotest_common.sh@10 -- # set +x 00:11:34.698 ************************************ 00:11:34.698 START TEST lvs_grow_dirty 00:11:34.698 ************************************ 00:11:34.698 16:58:50 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:34.698 16:58:50 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:34.956 16:58:50 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:34.956 16:58:50 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:35.214 16:58:50 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:35.214 16:58:50 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:35.214 16:58:50 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:35.471 16:58:50 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:35.472 16:58:50 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:35.472 16:58:50 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d18c526a-0a3e-499c-9cef-fb5dd351c978 lvol 150 00:11:35.730 16:58:51 -- target/nvmf_lvs_grow.sh@33 -- # lvol=bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 00:11:35.730 16:58:51 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:35.730 16:58:51 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:35.988 [2024-04-18 16:58:51.508702] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:35.988 [2024-04-18 16:58:51.508806] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:35.988 true 00:11:35.989 16:58:51 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:35.989 16:58:51 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:36.246 16:58:51 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:36.246 16:58:51 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:36.503 16:58:52 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 00:11:36.761 16:58:52 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:37.018 16:58:52 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:37.276 16:58:52 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1655977 00:11:37.276 16:58:52 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:37.276 16:58:52 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:37.276 16:58:52 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1655977 /var/tmp/bdevperf.sock 00:11:37.276 16:58:52 -- common/autotest_common.sh@817 -- # '[' -z 1655977 ']' 00:11:37.276 16:58:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:37.276 16:58:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:37.276 16:58:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:37.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:37.276 16:58:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:37.276 16:58:52 -- common/autotest_common.sh@10 -- # set +x 00:11:37.276 [2024-04-18 16:58:52.876573] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:11:37.276 [2024-04-18 16:58:52.876658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655977 ] 00:11:37.276 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.276 [2024-04-18 16:58:52.937377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.534 [2024-04-18 16:58:53.050878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.534 16:58:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:37.534 16:58:53 -- common/autotest_common.sh@850 -- # return 0 00:11:37.534 16:58:53 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:37.792 Nvme0n1 00:11:38.049 16:58:53 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:38.307 [ 00:11:38.308 { 00:11:38.308 "name": "Nvme0n1", 00:11:38.308 "aliases": [ 00:11:38.308 "bbdd0443-732e-4eb3-bdbc-dd4d1193bba7" 00:11:38.308 ], 00:11:38.308 "product_name": "NVMe disk", 00:11:38.308 "block_size": 4096, 00:11:38.308 "num_blocks": 38912, 00:11:38.308 "uuid": "bbdd0443-732e-4eb3-bdbc-dd4d1193bba7", 00:11:38.308 "assigned_rate_limits": { 00:11:38.308 "rw_ios_per_sec": 0, 00:11:38.308 "rw_mbytes_per_sec": 0, 00:11:38.308 "r_mbytes_per_sec": 0, 00:11:38.308 "w_mbytes_per_sec": 0 00:11:38.308 }, 00:11:38.308 "claimed": false, 00:11:38.308 "zoned": false, 00:11:38.308 "supported_io_types": { 00:11:38.308 "read": true, 00:11:38.308 "write": true, 00:11:38.308 "unmap": true, 00:11:38.308 "write_zeroes": true, 00:11:38.308 "flush": true, 00:11:38.308 "reset": true, 00:11:38.308 "compare": true, 00:11:38.308 
"compare_and_write": true, 00:11:38.308 "abort": true, 00:11:38.308 "nvme_admin": true, 00:11:38.308 "nvme_io": true 00:11:38.308 }, 00:11:38.308 "memory_domains": [ 00:11:38.308 { 00:11:38.308 "dma_device_id": "system", 00:11:38.308 "dma_device_type": 1 00:11:38.308 } 00:11:38.308 ], 00:11:38.308 "driver_specific": { 00:11:38.308 "nvme": [ 00:11:38.308 { 00:11:38.308 "trid": { 00:11:38.308 "trtype": "TCP", 00:11:38.308 "adrfam": "IPv4", 00:11:38.308 "traddr": "10.0.0.2", 00:11:38.308 "trsvcid": "4420", 00:11:38.308 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:38.308 }, 00:11:38.308 "ctrlr_data": { 00:11:38.308 "cntlid": 1, 00:11:38.308 "vendor_id": "0x8086", 00:11:38.308 "model_number": "SPDK bdev Controller", 00:11:38.308 "serial_number": "SPDK0", 00:11:38.308 "firmware_revision": "24.05", 00:11:38.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:38.308 "oacs": { 00:11:38.308 "security": 0, 00:11:38.308 "format": 0, 00:11:38.308 "firmware": 0, 00:11:38.308 "ns_manage": 0 00:11:38.308 }, 00:11:38.308 "multi_ctrlr": true, 00:11:38.308 "ana_reporting": false 00:11:38.308 }, 00:11:38.308 "vs": { 00:11:38.308 "nvme_version": "1.3" 00:11:38.308 }, 00:11:38.308 "ns_data": { 00:11:38.308 "id": 1, 00:11:38.308 "can_share": true 00:11:38.308 } 00:11:38.308 } 00:11:38.308 ], 00:11:38.308 "mp_policy": "active_passive" 00:11:38.308 } 00:11:38.308 } 00:11:38.308 ] 00:11:38.308 16:58:53 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1656111 00:11:38.308 16:58:53 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:38.308 16:58:53 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:38.308 Running I/O for 10 seconds... 
00:11:39.241 Latency(us) 00:11:39.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.241 Nvme0n1 : 1.00 14166.00 55.34 0.00 0.00 0.00 0.00 0.00 00:11:39.241 =================================================================================================================== 00:11:39.241 Total : 14166.00 55.34 0.00 0.00 0.00 0.00 0.00 00:11:39.241 00:11:40.175 16:58:55 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:40.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.434 Nvme0n1 : 2.00 14297.00 55.85 0.00 0.00 0.00 0.00 0.00 00:11:40.434 =================================================================================================================== 00:11:40.434 Total : 14297.00 55.85 0.00 0.00 0.00 0.00 0.00 00:11:40.434 00:11:40.434 true 00:11:40.434 16:58:56 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:40.434 16:58:56 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:40.692 16:58:56 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:40.692 16:58:56 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:40.692 16:58:56 -- target/nvmf_lvs_grow.sh@65 -- # wait 1656111 00:11:41.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.258 Nvme0n1 : 3.00 14386.00 56.20 0.00 0.00 0.00 0.00 0.00 00:11:41.258 =================================================================================================================== 00:11:41.258 Total : 14386.00 56.20 0.00 0.00 0.00 0.00 0.00 00:11:41.258 00:11:42.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.630 
Nvme0n1 : 4.00 14474.75 56.54 0.00 0.00 0.00 0.00 0.00 00:11:42.630 =================================================================================================================== 00:11:42.630 Total : 14474.75 56.54 0.00 0.00 0.00 0.00 0.00 00:11:42.630 00:11:43.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.563 Nvme0n1 : 5.00 14527.80 56.75 0.00 0.00 0.00 0.00 0.00 00:11:43.563 =================================================================================================================== 00:11:43.563 Total : 14527.80 56.75 0.00 0.00 0.00 0.00 0.00 00:11:43.563 00:11:44.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.496 Nvme0n1 : 6.00 14648.00 57.22 0.00 0.00 0.00 0.00 0.00 00:11:44.496 =================================================================================================================== 00:11:44.496 Total : 14648.00 57.22 0.00 0.00 0.00 0.00 0.00 00:11:44.496 00:11:45.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.431 Nvme0n1 : 7.00 14698.00 57.41 0.00 0.00 0.00 0.00 0.00 00:11:45.431 =================================================================================================================== 00:11:45.431 Total : 14698.00 57.41 0.00 0.00 0.00 0.00 0.00 00:11:45.431 00:11:46.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.216 Nvme0n1 : 8.00 14776.88 57.72 0.00 0.00 0.00 0.00 0.00 00:11:46.216 =================================================================================================================== 00:11:46.216 Total : 14776.88 57.72 0.00 0.00 0.00 0.00 0.00 00:11:46.216 00:11:47.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.590 Nvme0n1 : 9.00 14800.78 57.82 0.00 0.00 0.00 0.00 0.00 00:11:47.590 =================================================================================================================== 
00:11:47.590 Total : 14800.78 57.82 0.00 0.00 0.00 0.00 0.00 00:11:47.590 00:11:48.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.523 Nvme0n1 : 10.00 14819.90 57.89 0.00 0.00 0.00 0.00 0.00 00:11:48.523 =================================================================================================================== 00:11:48.523 Total : 14819.90 57.89 0.00 0.00 0.00 0.00 0.00 00:11:48.523 00:11:48.523 00:11:48.523 Latency(us) 00:11:48.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.523 Nvme0n1 : 10.01 14821.40 57.90 0.00 0.00 8631.52 5024.43 18155.90 00:11:48.523 =================================================================================================================== 00:11:48.523 Total : 14821.40 57.90 0.00 0.00 8631.52 5024.43 18155.90 00:11:48.523 0 00:11:48.524 16:59:03 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1655977 00:11:48.524 16:59:03 -- common/autotest_common.sh@936 -- # '[' -z 1655977 ']' 00:11:48.524 16:59:03 -- common/autotest_common.sh@940 -- # kill -0 1655977 00:11:48.524 16:59:03 -- common/autotest_common.sh@941 -- # uname 00:11:48.524 16:59:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.524 16:59:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1655977 00:11:48.524 16:59:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:48.524 16:59:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:48.524 16:59:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1655977' 00:11:48.524 killing process with pid 1655977 00:11:48.524 16:59:03 -- common/autotest_common.sh@955 -- # kill 1655977 00:11:48.524 Received shutdown signal, test time was about 10.000000 seconds 00:11:48.524 00:11:48.524 Latency(us) 00:11:48.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:11:48.524 =================================================================================================================== 00:11:48.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:48.524 16:59:03 -- common/autotest_common.sh@960 -- # wait 1655977 00:11:48.524 16:59:04 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:48.780 16:59:04 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:48.780 16:59:04 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:49.037 16:59:04 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:49.037 16:59:04 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:11:49.037 16:59:04 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1653470 00:11:49.037 16:59:04 -- target/nvmf_lvs_grow.sh@74 -- # wait 1653470 00:11:49.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1653470 Killed "${NVMF_APP[@]}" "$@" 00:11:49.037 16:59:04 -- target/nvmf_lvs_grow.sh@74 -- # true 00:11:49.037 16:59:04 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:11:49.037 16:59:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:49.037 16:59:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:49.037 16:59:04 -- common/autotest_common.sh@10 -- # set +x 00:11:49.037 16:59:04 -- nvmf/common.sh@470 -- # nvmfpid=1657319 00:11:49.037 16:59:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:49.037 16:59:04 -- nvmf/common.sh@471 -- # waitforlisten 1657319 00:11:49.037 16:59:04 -- common/autotest_common.sh@817 -- # '[' -z 1657319 ']' 00:11:49.037 16:59:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.037 
16:59:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:49.037 16:59:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.037 16:59:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:49.037 16:59:04 -- common/autotest_common.sh@10 -- # set +x 00:11:49.295 [2024-04-18 16:59:04.778526] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:11:49.295 [2024-04-18 16:59:04.778622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.295 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.295 [2024-04-18 16:59:04.849834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.295 [2024-04-18 16:59:04.967732] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.295 [2024-04-18 16:59:04.967790] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.295 [2024-04-18 16:59:04.967819] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.295 [2024-04-18 16:59:04.967830] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.295 [2024-04-18 16:59:04.967841] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:49.295 [2024-04-18 16:59:04.967868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.553 16:59:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:49.553 16:59:05 -- common/autotest_common.sh@850 -- # return 0 00:11:49.553 16:59:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:49.553 16:59:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:49.553 16:59:05 -- common/autotest_common.sh@10 -- # set +x 00:11:49.553 16:59:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.553 16:59:05 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:49.811 [2024-04-18 16:59:05.342977] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:49.811 [2024-04-18 16:59:05.343119] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:49.811 [2024-04-18 16:59:05.343175] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:49.811 16:59:05 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:11:49.811 16:59:05 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 00:11:49.811 16:59:05 -- common/autotest_common.sh@885 -- # local bdev_name=bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 00:11:49.811 16:59:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:49.811 16:59:05 -- common/autotest_common.sh@887 -- # local i 00:11:49.811 16:59:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:49.811 16:59:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:49.811 16:59:05 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:50.074 16:59:05 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 -t 2000 00:11:50.378 [ 00:11:50.378 { 00:11:50.378 "name": "bbdd0443-732e-4eb3-bdbc-dd4d1193bba7", 00:11:50.378 "aliases": [ 00:11:50.378 "lvs/lvol" 00:11:50.378 ], 00:11:50.378 "product_name": "Logical Volume", 00:11:50.378 "block_size": 4096, 00:11:50.378 "num_blocks": 38912, 00:11:50.378 "uuid": "bbdd0443-732e-4eb3-bdbc-dd4d1193bba7", 00:11:50.378 "assigned_rate_limits": { 00:11:50.378 "rw_ios_per_sec": 0, 00:11:50.378 "rw_mbytes_per_sec": 0, 00:11:50.378 "r_mbytes_per_sec": 0, 00:11:50.378 "w_mbytes_per_sec": 0 00:11:50.378 }, 00:11:50.378 "claimed": false, 00:11:50.378 "zoned": false, 00:11:50.378 "supported_io_types": { 00:11:50.378 "read": true, 00:11:50.378 "write": true, 00:11:50.378 "unmap": true, 00:11:50.378 "write_zeroes": true, 00:11:50.378 "flush": false, 00:11:50.378 "reset": true, 00:11:50.378 "compare": false, 00:11:50.378 "compare_and_write": false, 00:11:50.378 "abort": false, 00:11:50.378 "nvme_admin": false, 00:11:50.378 "nvme_io": false 00:11:50.378 }, 00:11:50.378 "driver_specific": { 00:11:50.378 "lvol": { 00:11:50.378 "lvol_store_uuid": "d18c526a-0a3e-499c-9cef-fb5dd351c978", 00:11:50.378 "base_bdev": "aio_bdev", 00:11:50.378 "thin_provision": false, 00:11:50.378 "snapshot": false, 00:11:50.378 "clone": false, 00:11:50.378 "esnap_clone": false 00:11:50.378 } 00:11:50.378 } 00:11:50.378 } 00:11:50.378 ] 00:11:50.378 16:59:05 -- common/autotest_common.sh@893 -- # return 0 00:11:50.378 16:59:05 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:50.378 16:59:05 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:11:50.636 16:59:06 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:11:50.636 16:59:06 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:50.636 16:59:06 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:11:50.894 16:59:06 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:11:50.894 16:59:06 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:51.153 [2024-04-18 16:59:06.684022] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:51.153 16:59:06 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:51.153 16:59:06 -- common/autotest_common.sh@638 -- # local es=0 00:11:51.153 16:59:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:51.153 16:59:06 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.153 16:59:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.153 16:59:06 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.153 16:59:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.153 16:59:06 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.153 16:59:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.153 16:59:06 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.153 16:59:06 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:51.153 
16:59:06 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:51.411 request: 00:11:51.411 { 00:11:51.411 "uuid": "d18c526a-0a3e-499c-9cef-fb5dd351c978", 00:11:51.411 "method": "bdev_lvol_get_lvstores", 00:11:51.411 "req_id": 1 00:11:51.411 } 00:11:51.411 Got JSON-RPC error response 00:11:51.411 response: 00:11:51.411 { 00:11:51.411 "code": -19, 00:11:51.411 "message": "No such device" 00:11:51.411 } 00:11:51.411 16:59:06 -- common/autotest_common.sh@641 -- # es=1 00:11:51.411 16:59:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:51.411 16:59:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:51.411 16:59:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:51.411 16:59:06 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:51.670 aio_bdev 00:11:51.670 16:59:07 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 00:11:51.670 16:59:07 -- common/autotest_common.sh@885 -- # local bdev_name=bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 00:11:51.670 16:59:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:51.670 16:59:07 -- common/autotest_common.sh@887 -- # local i 00:11:51.670 16:59:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:51.670 16:59:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:51.670 16:59:07 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:51.928 16:59:07 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 -t 2000 00:11:52.186 [ 00:11:52.186 { 00:11:52.186 "name": 
"bbdd0443-732e-4eb3-bdbc-dd4d1193bba7", 00:11:52.186 "aliases": [ 00:11:52.186 "lvs/lvol" 00:11:52.186 ], 00:11:52.186 "product_name": "Logical Volume", 00:11:52.186 "block_size": 4096, 00:11:52.186 "num_blocks": 38912, 00:11:52.186 "uuid": "bbdd0443-732e-4eb3-bdbc-dd4d1193bba7", 00:11:52.186 "assigned_rate_limits": { 00:11:52.186 "rw_ios_per_sec": 0, 00:11:52.186 "rw_mbytes_per_sec": 0, 00:11:52.186 "r_mbytes_per_sec": 0, 00:11:52.186 "w_mbytes_per_sec": 0 00:11:52.186 }, 00:11:52.186 "claimed": false, 00:11:52.186 "zoned": false, 00:11:52.186 "supported_io_types": { 00:11:52.186 "read": true, 00:11:52.186 "write": true, 00:11:52.186 "unmap": true, 00:11:52.186 "write_zeroes": true, 00:11:52.186 "flush": false, 00:11:52.186 "reset": true, 00:11:52.186 "compare": false, 00:11:52.186 "compare_and_write": false, 00:11:52.187 "abort": false, 00:11:52.187 "nvme_admin": false, 00:11:52.187 "nvme_io": false 00:11:52.187 }, 00:11:52.187 "driver_specific": { 00:11:52.187 "lvol": { 00:11:52.187 "lvol_store_uuid": "d18c526a-0a3e-499c-9cef-fb5dd351c978", 00:11:52.187 "base_bdev": "aio_bdev", 00:11:52.187 "thin_provision": false, 00:11:52.187 "snapshot": false, 00:11:52.187 "clone": false, 00:11:52.187 "esnap_clone": false 00:11:52.187 } 00:11:52.187 } 00:11:52.187 } 00:11:52.187 ] 00:11:52.187 16:59:07 -- common/autotest_common.sh@893 -- # return 0 00:11:52.187 16:59:07 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:52.187 16:59:07 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:52.445 16:59:07 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:52.445 16:59:07 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:52.445 16:59:07 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:52.703 
16:59:08 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:52.703 16:59:08 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bbdd0443-732e-4eb3-bdbc-dd4d1193bba7 00:11:52.961 16:59:08 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d18c526a-0a3e-499c-9cef-fb5dd351c978 00:11:53.220 16:59:08 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:53.478 16:59:08 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:53.478 00:11:53.478 real 0m18.800s 00:11:53.478 user 0m47.736s 00:11:53.478 sys 0m4.507s 00:11:53.478 16:59:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:53.478 16:59:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.478 ************************************ 00:11:53.478 END TEST lvs_grow_dirty 00:11:53.478 ************************************ 00:11:53.478 16:59:09 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:53.478 16:59:09 -- common/autotest_common.sh@794 -- # type=--id 00:11:53.478 16:59:09 -- common/autotest_common.sh@795 -- # id=0 00:11:53.478 16:59:09 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:53.478 16:59:09 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:53.478 16:59:09 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:53.478 16:59:09 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:53.478 16:59:09 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:53.478 16:59:09 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:53.478 nvmf_trace.0 00:11:53.478 16:59:09 -- common/autotest_common.sh@809 -- # 
return 0 00:11:53.478 16:59:09 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:53.478 16:59:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:53.478 16:59:09 -- nvmf/common.sh@117 -- # sync 00:11:53.478 16:59:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:53.478 16:59:09 -- nvmf/common.sh@120 -- # set +e 00:11:53.478 16:59:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.478 16:59:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:53.478 rmmod nvme_tcp 00:11:53.478 rmmod nvme_fabrics 00:11:53.478 rmmod nvme_keyring 00:11:53.478 16:59:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.478 16:59:09 -- nvmf/common.sh@124 -- # set -e 00:11:53.478 16:59:09 -- nvmf/common.sh@125 -- # return 0 00:11:53.478 16:59:09 -- nvmf/common.sh@478 -- # '[' -n 1657319 ']' 00:11:53.478 16:59:09 -- nvmf/common.sh@479 -- # killprocess 1657319 00:11:53.478 16:59:09 -- common/autotest_common.sh@936 -- # '[' -z 1657319 ']' 00:11:53.478 16:59:09 -- common/autotest_common.sh@940 -- # kill -0 1657319 00:11:53.478 16:59:09 -- common/autotest_common.sh@941 -- # uname 00:11:53.478 16:59:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:53.478 16:59:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1657319 00:11:53.478 16:59:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:53.478 16:59:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:53.478 16:59:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1657319' 00:11:53.478 killing process with pid 1657319 00:11:53.478 16:59:09 -- common/autotest_common.sh@955 -- # kill 1657319 00:11:53.478 16:59:09 -- common/autotest_common.sh@960 -- # wait 1657319 00:11:54.046 16:59:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:54.046 16:59:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:54.046 16:59:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:54.046 16:59:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.046 16:59:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.046 16:59:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.046 16:59:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.046 16:59:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.954 16:59:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:55.954 00:11:55.954 real 0m41.468s 00:11:55.954 user 1m10.178s 00:11:55.954 sys 0m8.301s 00:11:55.954 16:59:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:55.954 16:59:11 -- common/autotest_common.sh@10 -- # set +x 00:11:55.954 ************************************ 00:11:55.954 END TEST nvmf_lvs_grow 00:11:55.954 ************************************ 00:11:55.954 16:59:11 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:55.954 16:59:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:55.954 16:59:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.954 16:59:11 -- common/autotest_common.sh@10 -- # set +x 00:11:55.954 ************************************ 00:11:55.954 START TEST nvmf_bdev_io_wait 00:11:55.954 ************************************ 00:11:55.954 16:59:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:56.213 * Looking for test storage... 
00:11:56.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.213 16:59:11 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.213 16:59:11 -- nvmf/common.sh@7 -- # uname -s 00:11:56.213 16:59:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.213 16:59:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.213 16:59:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.213 16:59:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.213 16:59:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.213 16:59:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.213 16:59:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.213 16:59:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.213 16:59:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.213 16:59:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.213 16:59:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.213 16:59:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.213 16:59:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.213 16:59:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.213 16:59:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.213 16:59:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.213 16:59:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.213 16:59:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.213 16:59:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.213 16:59:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.213 16:59:11 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.213 16:59:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.213 16:59:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.213 16:59:11 -- paths/export.sh@5 -- # export PATH 00:11:56.214 16:59:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.214 16:59:11 -- nvmf/common.sh@47 -- # : 0 00:11:56.214 16:59:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.214 16:59:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.214 16:59:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.214 16:59:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.214 16:59:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.214 16:59:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.214 16:59:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.214 16:59:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.214 16:59:11 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.214 16:59:11 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.214 16:59:11 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:56.214 16:59:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:56.214 16:59:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.214 16:59:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:56.214 16:59:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:56.214 16:59:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:56.214 16:59:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.214 16:59:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.214 16:59:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.214 
16:59:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:56.214 16:59:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:56.214 16:59:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:56.214 16:59:11 -- common/autotest_common.sh@10 -- # set +x 00:11:58.119 16:59:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:58.119 16:59:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.119 16:59:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.119 16:59:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.119 16:59:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.119 16:59:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.119 16:59:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.119 16:59:13 -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.119 16:59:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.119 16:59:13 -- nvmf/common.sh@296 -- # e810=() 00:11:58.119 16:59:13 -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.119 16:59:13 -- nvmf/common.sh@297 -- # x722=() 00:11:58.119 16:59:13 -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.119 16:59:13 -- nvmf/common.sh@298 -- # mlx=() 00:11:58.119 16:59:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.119 16:59:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.119 16:59:13 
-- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.119 16:59:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.119 16:59:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.119 16:59:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.119 16:59:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.119 16:59:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.119 16:59:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.119 16:59:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.119 16:59:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:58.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:58.119 16:59:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.119 16:59:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.120 16:59:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:58.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:58.120 16:59:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.120 16:59:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.120 16:59:13 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.120 16:59:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.120 16:59:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:58.120 16:59:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.120 16:59:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:58.120 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:58.120 16:59:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.120 16:59:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.120 16:59:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.120 16:59:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:58.120 16:59:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.120 16:59:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:58.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:58.120 16:59:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.120 16:59:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:58.120 16:59:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:58.120 16:59:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:58.120 16:59:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.120 16:59:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.120 16:59:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.120 16:59:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.120 16:59:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.120 16:59:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.120 16:59:13 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.120 16:59:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.120 16:59:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.120 16:59:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.120 16:59:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.120 16:59:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.120 16:59:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.120 16:59:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.120 16:59:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.120 16:59:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.120 16:59:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.120 16:59:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.120 16:59:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.120 16:59:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:58.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:58.120 00:11:58.120 --- 10.0.0.2 ping statistics --- 00:11:58.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.120 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:58.120 16:59:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:58.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:11:58.120 00:11:58.120 --- 10.0.0.1 ping statistics --- 00:11:58.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.120 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:58.120 16:59:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.120 16:59:13 -- nvmf/common.sh@411 -- # return 0 00:11:58.120 16:59:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:58.120 16:59:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.120 16:59:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:58.120 16:59:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.120 16:59:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:58.120 16:59:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:58.120 16:59:13 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:58.120 16:59:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:58.120 16:59:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:58.120 16:59:13 -- common/autotest_common.sh@10 -- # set +x 00:11:58.120 16:59:13 -- nvmf/common.sh@470 -- # nvmfpid=1659849 00:11:58.120 16:59:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:58.120 16:59:13 -- nvmf/common.sh@471 -- # waitforlisten 1659849 00:11:58.120 16:59:13 -- common/autotest_common.sh@817 -- # '[' -z 1659849 ']' 00:11:58.120 16:59:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.120 16:59:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:58.120 16:59:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:58.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.120 16:59:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:58.120 16:59:13 -- common/autotest_common.sh@10 -- # set +x 00:11:58.120 [2024-04-18 16:59:13.809971] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:11:58.120 [2024-04-18 16:59:13.810046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.378 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.378 [2024-04-18 16:59:13.878709] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.378 [2024-04-18 16:59:13.996204] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.378 [2024-04-18 16:59:13.996257] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.378 [2024-04-18 16:59:13.996272] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.378 [2024-04-18 16:59:13.996284] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.378 [2024-04-18 16:59:13.996294] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:58.378 [2024-04-18 16:59:13.996353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.378 [2024-04-18 16:59:13.996422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.378 [2024-04-18 16:59:13.996450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.378 [2024-04-18 16:59:13.996453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.312 16:59:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:59.312 16:59:14 -- common/autotest_common.sh@850 -- # return 0 00:11:59.312 16:59:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:59.312 16:59:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:59.313 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.313 16:59:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:59.313 16:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.313 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.313 16:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:59.313 16:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.313 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.313 16:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.313 16:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.313 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.313 [2024-04-18 16:59:14.846861] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.313 16:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:59.313 16:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.313 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.313 Malloc0 00:11:59.313 16:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:59.313 16:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.313 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.313 16:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.313 16:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.313 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.313 16:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.313 16:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.313 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.313 [2024-04-18 16:59:14.909302] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.313 16:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1660004 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@30 -- # READ_PID=1660006 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:59.313 16:59:14 -- nvmf/common.sh@521 -- # config=() 00:11:59.313 16:59:14 -- nvmf/common.sh@521 -- # local 
subsystem config 00:11:59.313 16:59:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1660008 00:11:59.313 16:59:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:59.313 { 00:11:59.313 "params": { 00:11:59.313 "name": "Nvme$subsystem", 00:11:59.313 "trtype": "$TEST_TRANSPORT", 00:11:59.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:59.313 "adrfam": "ipv4", 00:11:59.313 "trsvcid": "$NVMF_PORT", 00:11:59.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:59.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:59.313 "hdgst": ${hdgst:-false}, 00:11:59.313 "ddgst": ${ddgst:-false} 00:11:59.313 }, 00:11:59.313 "method": "bdev_nvme_attach_controller" 00:11:59.313 } 00:11:59.313 EOF 00:11:59.313 )") 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:59.313 16:59:14 -- nvmf/common.sh@521 -- # config=() 00:11:59.313 16:59:14 -- nvmf/common.sh@521 -- # local subsystem config 00:11:59.313 16:59:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:59.313 16:59:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:59.313 { 00:11:59.313 "params": { 00:11:59.313 "name": "Nvme$subsystem", 00:11:59.313 "trtype": "$TEST_TRANSPORT", 00:11:59.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:59.313 "adrfam": "ipv4", 00:11:59.313 "trsvcid": "$NVMF_PORT", 00:11:59.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:59.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:59.313 "hdgst": ${hdgst:-false}, 00:11:59.313 "ddgst": ${ddgst:-false} 00:11:59.313 }, 00:11:59.313 "method": "bdev_nvme_attach_controller" 00:11:59.313 } 00:11:59.313 EOF 00:11:59.313 )") 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:59.313 
16:59:14 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1660010 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@35 -- # sync 00:11:59.313 16:59:14 -- nvmf/common.sh@521 -- # config=() 00:11:59.313 16:59:14 -- nvmf/common.sh@521 -- # local subsystem config 00:11:59.313 16:59:14 -- nvmf/common.sh@543 -- # cat 00:11:59.313 16:59:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:59.313 16:59:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:59.313 { 00:11:59.313 "params": { 00:11:59.313 "name": "Nvme$subsystem", 00:11:59.313 "trtype": "$TEST_TRANSPORT", 00:11:59.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:59.313 "adrfam": "ipv4", 00:11:59.313 "trsvcid": "$NVMF_PORT", 00:11:59.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:59.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:59.313 "hdgst": ${hdgst:-false}, 00:11:59.313 "ddgst": ${ddgst:-false} 00:11:59.313 }, 00:11:59.313 "method": "bdev_nvme_attach_controller" 00:11:59.313 } 00:11:59.313 EOF 00:11:59.313 )") 00:11:59.313 16:59:14 -- nvmf/common.sh@521 -- # config=() 00:11:59.313 16:59:14 -- nvmf/common.sh@521 -- # local subsystem config 00:11:59.313 16:59:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:59.313 16:59:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:59.313 { 00:11:59.313 "params": { 00:11:59.313 "name": "Nvme$subsystem", 00:11:59.313 "trtype": "$TEST_TRANSPORT", 00:11:59.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:59.313 "adrfam": "ipv4", 00:11:59.313 "trsvcid": "$NVMF_PORT", 
00:11:59.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:59.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:59.313 "hdgst": ${hdgst:-false}, 00:11:59.313 "ddgst": ${ddgst:-false} 00:11:59.313 }, 00:11:59.313 "method": "bdev_nvme_attach_controller" 00:11:59.313 } 00:11:59.313 EOF 00:11:59.313 )") 00:11:59.313 16:59:14 -- nvmf/common.sh@543 -- # cat 00:11:59.313 16:59:14 -- nvmf/common.sh@543 -- # cat 00:11:59.313 16:59:14 -- nvmf/common.sh@543 -- # cat 00:11:59.313 16:59:14 -- target/bdev_io_wait.sh@37 -- # wait 1660004 00:11:59.313 16:59:14 -- nvmf/common.sh@545 -- # jq . 00:11:59.313 16:59:14 -- nvmf/common.sh@545 -- # jq . 00:11:59.313 16:59:14 -- nvmf/common.sh@545 -- # jq . 00:11:59.313 16:59:14 -- nvmf/common.sh@546 -- # IFS=, 00:11:59.313 16:59:14 -- nvmf/common.sh@545 -- # jq . 00:11:59.313 16:59:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:59.313 "params": { 00:11:59.313 "name": "Nvme1", 00:11:59.313 "trtype": "tcp", 00:11:59.313 "traddr": "10.0.0.2", 00:11:59.313 "adrfam": "ipv4", 00:11:59.313 "trsvcid": "4420", 00:11:59.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:59.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:59.313 "hdgst": false, 00:11:59.313 "ddgst": false 00:11:59.313 }, 00:11:59.313 "method": "bdev_nvme_attach_controller" 00:11:59.313 }' 00:11:59.313 16:59:14 -- nvmf/common.sh@546 -- # IFS=, 00:11:59.313 16:59:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:59.313 "params": { 00:11:59.313 "name": "Nvme1", 00:11:59.313 "trtype": "tcp", 00:11:59.313 "traddr": "10.0.0.2", 00:11:59.313 "adrfam": "ipv4", 00:11:59.313 "trsvcid": "4420", 00:11:59.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:59.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:59.313 "hdgst": false, 00:11:59.313 "ddgst": false 00:11:59.313 }, 00:11:59.313 "method": "bdev_nvme_attach_controller" 00:11:59.313 }' 00:11:59.313 16:59:14 -- nvmf/common.sh@546 -- # IFS=, 00:11:59.313 16:59:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 
00:11:59.313 "params": { 00:11:59.313 "name": "Nvme1", 00:11:59.313 "trtype": "tcp", 00:11:59.313 "traddr": "10.0.0.2", 00:11:59.313 "adrfam": "ipv4", 00:11:59.313 "trsvcid": "4420", 00:11:59.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:59.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:59.313 "hdgst": false, 00:11:59.313 "ddgst": false 00:11:59.313 }, 00:11:59.313 "method": "bdev_nvme_attach_controller" 00:11:59.313 }' 00:11:59.313 16:59:14 -- nvmf/common.sh@546 -- # IFS=, 00:11:59.313 16:59:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:59.313 "params": { 00:11:59.313 "name": "Nvme1", 00:11:59.313 "trtype": "tcp", 00:11:59.313 "traddr": "10.0.0.2", 00:11:59.313 "adrfam": "ipv4", 00:11:59.313 "trsvcid": "4420", 00:11:59.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:59.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:59.313 "hdgst": false, 00:11:59.314 "ddgst": false 00:11:59.314 }, 00:11:59.314 "method": "bdev_nvme_attach_controller" 00:11:59.314 }' 00:11:59.314 [2024-04-18 16:59:14.956840] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:11:59.314 [2024-04-18 16:59:14.956840] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:11:59.314 [2024-04-18 16:59:14.956841] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
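Editor's note: the `gen_nvmf_target_json` calls above expand a bash heredoc (with `${hdgst:-false}`-style defaults) into the per-controller JSON that each bdevperf instance reads via `--json /dev/fd/63`. A minimal Python sketch of that expansion, reconstructed from the expanded JSON visible in this log (not from SPDK source, so treat field handling as an approximation):

```python
# Sketch of the config document gen_nvmf_target_json emits for subsystem 1.
# Reconstructed from the printf'd JSON in the log; hdgst/ddgst default to
# False, mirroring the ${hdgst:-false} parameter expansion in nvmf/common.sh.
import json

def gen_target_json(subsystem=1, trtype="tcp", traddr="10.0.0.2",
                    trsvcid="4420", hdgst=False, ddgst=False):
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    })

cfg = json.loads(gen_target_json())
print(cfg["params"]["subnqn"])  # → nqn.2016-06.io.spdk:cnode1
```

All four bdevperf jobs in this run receive the same controller block, differing only in workload flags on the bdevperf command line.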
00:11:59.314 [2024-04-18 16:59:14.956934] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:11:59.314 [2024-04-18 16:59:14.956934] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:11:59.314 [2024-04-18 16:59:14.956934] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:11:59.314 [2024-04-18 16:59:14.957090] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:11:59.314 [2024-04-18 16:59:14.957160] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:59.314 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.572 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.572 [2024-04-18 16:59:15.133410] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.572 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.572 [2024-04-18 16:59:15.229683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:59.572 [2024-04-18 16:59:15.237340] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.830 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.830 [2024-04-18 16:59:15.333836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:59.830 [2024-04-18 16:59:15.362142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.830 [2024-04-18 16:59:15.418618] app.c:
828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.830 [2024-04-18 16:59:15.463620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:59.830 [2024-04-18 16:59:15.507680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:00.088 Running I/O for 1 seconds... 00:12:00.088 Running I/O for 1 seconds... 00:12:00.088 Running I/O for 1 seconds... 00:12:00.088 Running I/O for 1 seconds... 00:12:01.025 00:12:01.025 Latency(us) 00:12:01.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.025 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:01.025 Nvme1n1 : 1.01 11326.93 44.25 0.00 0.00 11253.97 7330.32 19515.16 00:12:01.025 =================================================================================================================== 00:12:01.025 Total : 11326.93 44.25 0.00 0.00 11253.97 7330.32 19515.16 00:12:01.025 00:12:01.025 Latency(us) 00:12:01.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.025 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:01.025 Nvme1n1 : 1.02 5454.34 21.31 0.00 0.00 23128.54 8204.14 35923.44 00:12:01.025 =================================================================================================================== 00:12:01.025 Total : 5454.34 21.31 0.00 0.00 23128.54 8204.14 35923.44 00:12:01.025 00:12:01.025 Latency(us) 00:12:01.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.025 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:01.025 Nvme1n1 : 1.00 170900.01 667.58 0.00 0.00 746.08 292.79 1098.33 00:12:01.025 =================================================================================================================== 00:12:01.025 Total : 170900.01 667.58 0.00 0.00 746.08 292.79 1098.33 00:12:01.284 00:12:01.284 Latency(us) 00:12:01.284 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:12:01.284 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:01.284 Nvme1n1 : 1.01 5810.35 22.70 0.00 0.00 21943.01 7475.96 50098.63 00:12:01.284 =================================================================================================================== 00:12:01.284 Total : 5810.35 22.70 0.00 0.00 21943.01 7475.96 50098.63 00:12:01.284 16:59:16 -- target/bdev_io_wait.sh@38 -- # wait 1660006 00:12:01.284 16:59:16 -- target/bdev_io_wait.sh@39 -- # wait 1660008 00:12:01.542 16:59:17 -- target/bdev_io_wait.sh@40 -- # wait 1660010 00:12:01.542 16:59:17 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.542 16:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.542 16:59:17 -- common/autotest_common.sh@10 -- # set +x 00:12:01.542 16:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.542 16:59:17 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:01.542 16:59:17 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:01.542 16:59:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:01.542 16:59:17 -- nvmf/common.sh@117 -- # sync 00:12:01.542 16:59:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.542 16:59:17 -- nvmf/common.sh@120 -- # set +e 00:12:01.542 16:59:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.542 16:59:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.542 rmmod nvme_tcp 00:12:01.542 rmmod nvme_fabrics 00:12:01.542 rmmod nvme_keyring 00:12:01.542 16:59:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.542 16:59:17 -- nvmf/common.sh@124 -- # set -e 00:12:01.542 16:59:17 -- nvmf/common.sh@125 -- # return 0 00:12:01.542 16:59:17 -- nvmf/common.sh@478 -- # '[' -n 1659849 ']' 00:12:01.542 16:59:17 -- nvmf/common.sh@479 -- # killprocess 1659849 00:12:01.542 16:59:17 -- common/autotest_common.sh@936 -- # '[' -z 1659849 ']' 00:12:01.542 16:59:17 -- 
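Editor's note: the `MiB/s` column in bdevperf's result tables above is just IOPS times the 4096-byte IO size. A quick cross-check of all four jobs' reported figures:

```python
# Cross-check bdevperf's MiB/s column: MiB/s = IOPS * io_size_bytes / 2**20.
def mib_per_s(iops, io_size=4096):
    return round(iops * io_size / (1 << 20), 2)

assert mib_per_s(11326.93) == 44.25    # read job   (core mask 0x20)
assert mib_per_s(5454.34) == 21.31     # write job  (core mask 0x10)
assert mib_per_s(170900.01) == 667.58  # flush job  (core mask 0x40)
assert mib_per_s(5810.35) == 22.70     # unmap job  (core mask 0x80)
```

The flush job's far higher IOPS is expected: flushes against a malloc bdev complete without moving data, so its "throughput" figure is not comparable to the read/write jobs.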
common/autotest_common.sh@940 -- # kill -0 1659849 00:12:01.542 16:59:17 -- common/autotest_common.sh@941 -- # uname 00:12:01.542 16:59:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:01.542 16:59:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1659849 00:12:01.542 16:59:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:01.542 16:59:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:01.542 16:59:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1659849' 00:12:01.542 killing process with pid 1659849 00:12:01.542 16:59:17 -- common/autotest_common.sh@955 -- # kill 1659849 00:12:01.542 16:59:17 -- common/autotest_common.sh@960 -- # wait 1659849 00:12:01.801 16:59:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:01.801 16:59:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:01.801 16:59:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:01.801 16:59:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.801 16:59:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.801 16:59:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.801 16:59:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.801 16:59:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.337 16:59:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.337 00:12:04.337 real 0m7.835s 00:12:04.337 user 0m19.228s 00:12:04.337 sys 0m3.657s 00:12:04.337 16:59:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:04.337 16:59:19 -- common/autotest_common.sh@10 -- # set +x 00:12:04.337 ************************************ 00:12:04.337 END TEST nvmf_bdev_io_wait 00:12:04.337 ************************************ 00:12:04.337 16:59:19 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 
00:12:04.337 16:59:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:04.337 16:59:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.337 16:59:19 -- common/autotest_common.sh@10 -- # set +x 00:12:04.337 ************************************ 00:12:04.337 START TEST nvmf_queue_depth 00:12:04.337 ************************************ 00:12:04.337 16:59:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:04.337 * Looking for test storage... 00:12:04.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.337 16:59:19 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.337 16:59:19 -- nvmf/common.sh@7 -- # uname -s 00:12:04.337 16:59:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.337 16:59:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.337 16:59:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.337 16:59:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.337 16:59:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.337 16:59:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.337 16:59:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.337 16:59:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.337 16:59:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.337 16:59:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.337 16:59:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.337 16:59:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.337 16:59:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.337 16:59:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.337 16:59:19 -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.337 16:59:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.337 16:59:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.337 16:59:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.337 16:59:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.337 16:59:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.337 16:59:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.337 16:59:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.337 16:59:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.337 16:59:19 -- paths/export.sh@5 -- # export PATH 00:12:04.337 16:59:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.337 16:59:19 -- nvmf/common.sh@47 -- # : 0 00:12:04.337 16:59:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.337 16:59:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.337 16:59:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.337 16:59:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.337 16:59:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.337 16:59:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.337 16:59:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.337 16:59:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.337 16:59:19 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:04.337 16:59:19 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:04.337 16:59:19 -- 
target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:04.337 16:59:19 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:04.337 16:59:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:04.337 16:59:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.337 16:59:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:04.337 16:59:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:04.337 16:59:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:04.337 16:59:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.337 16:59:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.337 16:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.337 16:59:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:04.337 16:59:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:04.337 16:59:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.337 16:59:19 -- common/autotest_common.sh@10 -- # set +x 00:12:06.239 16:59:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:06.239 16:59:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:06.239 16:59:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:06.239 16:59:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:06.239 16:59:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:06.239 16:59:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:06.239 16:59:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:06.239 16:59:21 -- nvmf/common.sh@295 -- # net_devs=() 00:12:06.239 16:59:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:06.239 16:59:21 -- nvmf/common.sh@296 -- # e810=() 00:12:06.239 16:59:21 -- nvmf/common.sh@296 -- # local -ga e810 00:12:06.239 16:59:21 -- nvmf/common.sh@297 -- # x722=() 00:12:06.239 16:59:21 -- nvmf/common.sh@297 -- # local -ga x722 00:12:06.239 16:59:21 -- nvmf/common.sh@298 -- # mlx=() 00:12:06.239 16:59:21 -- nvmf/common.sh@298 -- # local -ga mlx 
00:12:06.239 16:59:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.239 16:59:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:06.239 16:59:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:06.239 16:59:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:06.239 16:59:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:06.239 16:59:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:06.239 16:59:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:06.239 16:59:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.239 16:59:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:06.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:06.240 16:59:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.240 16:59:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:06.240 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:06.240 16:59:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:06.240 16:59:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.240 16:59:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.240 16:59:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:06.240 16:59:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.240 16:59:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:06.240 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:06.240 16:59:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.240 16:59:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.240 16:59:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.240 16:59:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:06.240 16:59:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.240 16:59:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:06.240 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:06.240 16:59:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.240 16:59:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 
00:12:06.240 16:59:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:06.240 16:59:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:06.240 16:59:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.240 16:59:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.240 16:59:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.240 16:59:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:06.240 16:59:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.240 16:59:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.240 16:59:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:06.240 16:59:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.240 16:59:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.240 16:59:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:06.240 16:59:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:06.240 16:59:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.240 16:59:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.240 16:59:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.240 16:59:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.240 16:59:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:06.240 16:59:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.240 16:59:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.240 16:59:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.240 16:59:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:06.240 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:12:06.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:12:06.240 00:12:06.240 --- 10.0.0.2 ping statistics --- 00:12:06.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.240 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:12:06.240 16:59:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:12:06.240 00:12:06.240 --- 10.0.0.1 ping statistics --- 00:12:06.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.240 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:06.240 16:59:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.240 16:59:21 -- nvmf/common.sh@411 -- # return 0 00:12:06.240 16:59:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:06.240 16:59:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.240 16:59:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:06.240 16:59:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.240 16:59:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:06.240 16:59:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:06.240 16:59:21 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:06.240 16:59:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:06.240 16:59:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:06.240 16:59:21 -- common/autotest_common.sh@10 -- # set +x 00:12:06.240 16:59:21 -- nvmf/common.sh@470 -- # nvmfpid=1662238 00:12:06.240 16:59:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:06.240 16:59:21 -- nvmf/common.sh@471 -- # waitforlisten 1662238 00:12:06.240 16:59:21 -- 
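Editor's note: the `nvmf_tcp_init` steps above place the target interface `cvl_0_0` (10.0.0.2/24) inside the `cvl_0_0_ns_spdk` namespace and leave the initiator `cvl_0_1` (10.0.0.1/24) in the root namespace. An illustrative check (plain Python, not part of the test suite) that the two endpoints share the subnet the pings just exercised:

```python
# Illustrative check of the topology nvmf_tcp_init builds: initiator
# 10.0.0.1/24 on cvl_0_1, target 10.0.0.2/24 on cvl_0_0 inside the
# cvl_0_0_ns_spdk namespace. Both must land in one /24 for the ping
# and the subsequent NVMe/TCP connect on port 4420 to succeed.
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")
initiator = ipaddress.ip_address("10.0.0.1")
target = ipaddress.ip_address("10.0.0.2")

assert initiator in subnet and target in subnet
print(f"{initiator} -> {target}:4420 within {subnet}")
```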
common/autotest_common.sh@817 -- # '[' -z 1662238 ']' 00:12:06.240 16:59:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.240 16:59:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:06.240 16:59:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.240 16:59:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:06.240 16:59:21 -- common/autotest_common.sh@10 -- # set +x 00:12:06.240 [2024-04-18 16:59:21.771229] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:12:06.240 [2024-04-18 16:59:21.771323] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.240 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.240 [2024-04-18 16:59:21.840587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.498 [2024-04-18 16:59:21.959326] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.498 [2024-04-18 16:59:21.959395] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.498 [2024-04-18 16:59:21.959411] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.498 [2024-04-18 16:59:21.959424] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.498 [2024-04-18 16:59:21.959435] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:06.498 [2024-04-18 16:59:21.959462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.065 16:59:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:07.065 16:59:22 -- common/autotest_common.sh@850 -- # return 0 00:12:07.065 16:59:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:07.065 16:59:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:07.065 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:07.065 16:59:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.065 16:59:22 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.065 16:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.065 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:07.065 [2024-04-18 16:59:22.759273] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.065 16:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.065 16:59:22 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:07.065 16:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.065 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:07.324 Malloc0 00:12:07.324 16:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.324 16:59:22 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:07.324 16:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.324 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:07.324 16:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.324 16:59:22 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.324 16:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.324 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:07.324 16:59:22 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.324 16:59:22 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.324 16:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.324 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:07.324 [2024-04-18 16:59:22.824189] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.324 16:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.324 16:59:22 -- target/queue_depth.sh@30 -- # bdevperf_pid=1662389 00:12:07.324 16:59:22 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:07.324 16:59:22 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:07.324 16:59:22 -- target/queue_depth.sh@33 -- # waitforlisten 1662389 /var/tmp/bdevperf.sock 00:12:07.324 16:59:22 -- common/autotest_common.sh@817 -- # '[' -z 1662389 ']' 00:12:07.324 16:59:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:07.325 16:59:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:07.325 16:59:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:07.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:07.325 16:59:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:07.325 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:07.325 [2024-04-18 16:59:22.868972] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:12:07.325 [2024-04-18 16:59:22.869031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662389 ] 00:12:07.325 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.325 [2024-04-18 16:59:22.929114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.583 [2024-04-18 16:59:23.048533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.583 16:59:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:07.583 16:59:23 -- common/autotest_common.sh@850 -- # return 0 00:12:07.583 16:59:23 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:07.583 16:59:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.583 16:59:23 -- common/autotest_common.sh@10 -- # set +x 00:12:07.841 NVMe0n1 00:12:07.841 16:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.841 16:59:23 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:07.841 Running I/O for 10 seconds... 
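The target-side setup traced above boils down to a short RPC sequence followed by a bdevperf run. A hedged reconstruction, with the transport options, bdev geometry, NQN, listen address, and bdevperf flags taken verbatim from the trace (the `rpc.py` and `bdevperf` paths are assumed relative to an SPDK checkout):

```shell
#!/bin/sh
# Reconstruction of the queue_depth test flow from the xtrace above.
rpc=scripts/rpc.py

# 1. Create the TCP transport with 8192-byte in-capsule data (-o -u 8192)
$rpc nvmf_create_transport -t tcp -o -u 8192
# 2. Back the namespace with a 64 MiB, 512-byte-block malloc bdev
$rpc bdev_malloc_create 64 512 -b Malloc0
# 3. Create the subsystem, attach the namespace, and listen on TCP 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
# 4. Drive queue-depth-1024, 4 KiB verify I/O for 10 s from bdevperf
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 1024 -o 4096 -w verify -t 10 &
```

In the actual run, bdevperf starts in wait mode (`-z`) and the harness then attaches the controller over its own RPC socket (`bdev_nvme_attach_controller ... -n nqn.2016-06.io.spdk:cnode1`) before kicking off I/O with `bdevperf.py perform_tests`, as the subsequent trace lines show.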
00:12:20.121 00:12:20.121 Latency(us) 00:12:20.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.121 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:20.121 Verification LBA range: start 0x0 length 0x4000 00:12:20.121 NVMe0n1 : 10.09 8212.92 32.08 0.00 0.00 124178.36 24175.50 78837.38 00:12:20.121 =================================================================================================================== 00:12:20.121 Total : 8212.92 32.08 0.00 0.00 124178.36 24175.50 78837.38 00:12:20.121 0 00:12:20.121 16:59:33 -- target/queue_depth.sh@39 -- # killprocess 1662389 00:12:20.121 16:59:33 -- common/autotest_common.sh@936 -- # '[' -z 1662389 ']' 00:12:20.121 16:59:33 -- common/autotest_common.sh@940 -- # kill -0 1662389 00:12:20.121 16:59:33 -- common/autotest_common.sh@941 -- # uname 00:12:20.121 16:59:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:20.121 16:59:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1662389 00:12:20.121 16:59:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:20.121 16:59:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:20.121 16:59:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1662389' 00:12:20.121 killing process with pid 1662389 00:12:20.121 16:59:33 -- common/autotest_common.sh@955 -- # kill 1662389 00:12:20.121 Received shutdown signal, test time was about 10.000000 seconds 00:12:20.121 00:12:20.121 Latency(us) 00:12:20.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.121 =================================================================================================================== 00:12:20.121 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:20.121 16:59:33 -- common/autotest_common.sh@960 -- # wait 1662389 00:12:20.121 16:59:33 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:20.121 16:59:33 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:12:20.121 16:59:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:20.121 16:59:33 -- nvmf/common.sh@117 -- # sync 00:12:20.121 16:59:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.121 16:59:33 -- nvmf/common.sh@120 -- # set +e 00:12:20.121 16:59:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.121 16:59:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.121 rmmod nvme_tcp 00:12:20.121 rmmod nvme_fabrics 00:12:20.121 rmmod nvme_keyring 00:12:20.121 16:59:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.121 16:59:33 -- nvmf/common.sh@124 -- # set -e 00:12:20.121 16:59:33 -- nvmf/common.sh@125 -- # return 0 00:12:20.121 16:59:33 -- nvmf/common.sh@478 -- # '[' -n 1662238 ']' 00:12:20.121 16:59:33 -- nvmf/common.sh@479 -- # killprocess 1662238 00:12:20.121 16:59:33 -- common/autotest_common.sh@936 -- # '[' -z 1662238 ']' 00:12:20.121 16:59:33 -- common/autotest_common.sh@940 -- # kill -0 1662238 00:12:20.121 16:59:33 -- common/autotest_common.sh@941 -- # uname 00:12:20.121 16:59:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:20.121 16:59:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1662238 00:12:20.121 16:59:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:20.121 16:59:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:20.121 16:59:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1662238' 00:12:20.121 killing process with pid 1662238 00:12:20.121 16:59:34 -- common/autotest_common.sh@955 -- # kill 1662238 00:12:20.121 16:59:34 -- common/autotest_common.sh@960 -- # wait 1662238 00:12:20.121 16:59:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:20.121 16:59:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:20.121 16:59:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:20.122 16:59:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
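The `nvmftestfini`/`nvmf_tcp_fini` teardown traced above reduces to unloading the kernel initiator modules and tearing down the per-test network namespace. A sketch under the interface names from this run (`cvl_0_0`/`cvl_0_1`, namespace `cvl_0_0_ns_spdk`); requires root, and the `2>/dev/null` guards are an assumption for idempotence:

```shell
#!/bin/sh
# Hedged sketch of the TCP-transport cleanup sequence from the log.
sync
# Unloading nvme-tcp also pulls out nvme-fabrics and nvme-keyring, which is
# why the trace shows three rmmod lines for one modprobe -r invocation.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Drop the target-side namespace and flush the initiator-side address.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null
ip -4 addr flush cvl_0_1 2>/dev/null
```

The harness additionally kills the target process by PID (`killprocess $nvmfpid`) and collects its shared-memory trace file before this point, as the surrounding trace lines show.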
00:12:20.122 16:59:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.122 16:59:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.122 16:59:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.122 16:59:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.690 16:59:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:20.690 00:12:20.690 real 0m16.796s 00:12:20.690 user 0m23.598s 00:12:20.690 sys 0m3.107s 00:12:20.690 16:59:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:20.690 16:59:36 -- common/autotest_common.sh@10 -- # set +x 00:12:20.690 ************************************ 00:12:20.690 END TEST nvmf_queue_depth 00:12:20.690 ************************************ 00:12:20.949 16:59:36 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:20.949 16:59:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:20.949 16:59:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:20.949 16:59:36 -- common/autotest_common.sh@10 -- # set +x 00:12:20.949 ************************************ 00:12:20.949 START TEST nvmf_multipath 00:12:20.949 ************************************ 00:12:20.949 16:59:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:20.949 * Looking for test storage... 
00:12:20.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.949 16:59:36 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.949 16:59:36 -- nvmf/common.sh@7 -- # uname -s 00:12:20.949 16:59:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.949 16:59:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.949 16:59:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.949 16:59:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.949 16:59:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.949 16:59:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.949 16:59:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.949 16:59:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.949 16:59:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.949 16:59:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.949 16:59:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.949 16:59:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.949 16:59:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.949 16:59:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.949 16:59:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.949 16:59:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.949 16:59:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.949 16:59:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.949 16:59:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.949 16:59:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.949 16:59:36 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.949 16:59:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.949 16:59:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.949 16:59:36 -- paths/export.sh@5 -- # export PATH 00:12:20.949 16:59:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.949 16:59:36 -- nvmf/common.sh@47 -- # : 0 00:12:20.949 16:59:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.949 16:59:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.949 16:59:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.949 16:59:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.949 16:59:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.949 16:59:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.949 16:59:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.949 16:59:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.949 16:59:36 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.949 16:59:36 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.949 16:59:36 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:20.949 16:59:36 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:20.949 16:59:36 -- target/multipath.sh@43 -- # nvmftestinit 00:12:20.949 16:59:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:20.949 16:59:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.949 16:59:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:20.949 16:59:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:20.949 16:59:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:20.949 16:59:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:20.950 16:59:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.950 16:59:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.950 16:59:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:20.950 16:59:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:20.950 16:59:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.950 16:59:36 -- common/autotest_common.sh@10 -- # set +x 00:12:23.481 16:59:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:23.481 16:59:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.481 16:59:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.481 16:59:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.481 16:59:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.481 16:59:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.481 16:59:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.481 16:59:38 -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.481 16:59:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.481 16:59:38 -- nvmf/common.sh@296 -- # e810=() 00:12:23.481 16:59:38 -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.481 16:59:38 -- nvmf/common.sh@297 -- # x722=() 00:12:23.481 16:59:38 -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.481 16:59:38 -- nvmf/common.sh@298 -- # mlx=() 00:12:23.481 16:59:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.481 16:59:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:12:23.481 16:59:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.481 16:59:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.481 16:59:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:23.481 16:59:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.481 16:59:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.481 16:59:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:23.481 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:23.481 16:59:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.481 16:59:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:23.481 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:23.481 16:59:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.481 16:59:38 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.481 16:59:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.481 16:59:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.481 16:59:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:23.481 16:59:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.481 16:59:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:23.481 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:23.481 16:59:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.481 16:59:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.481 16:59:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.481 16:59:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:23.481 16:59:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.481 16:59:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:23.481 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:23.481 16:59:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.481 16:59:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:23.481 16:59:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:23.481 16:59:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:23.481 16:59:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:23.481 16:59:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.481 16:59:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.482 16:59:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.482 16:59:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:12:23.482 16:59:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.482 16:59:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.482 16:59:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.482 16:59:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.482 16:59:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.482 16:59:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.482 16:59:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.482 16:59:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.482 16:59:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.482 16:59:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.482 16:59:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.482 16:59:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.482 16:59:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.482 16:59:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.482 16:59:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.482 16:59:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:23.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:12:23.482 00:12:23.482 --- 10.0.0.2 ping statistics --- 00:12:23.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.482 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:12:23.482 16:59:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:23.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:12:23.482 00:12:23.482 --- 10.0.0.1 ping statistics --- 00:12:23.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.482 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:12:23.482 16:59:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.482 16:59:38 -- nvmf/common.sh@411 -- # return 0 00:12:23.482 16:59:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:23.482 16:59:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.482 16:59:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:23.482 16:59:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:23.482 16:59:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.482 16:59:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:23.482 16:59:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:23.482 16:59:38 -- target/multipath.sh@45 -- # '[' -z ']' 00:12:23.482 16:59:38 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:23.482 only one NIC for nvmf test 00:12:23.482 16:59:38 -- target/multipath.sh@47 -- # nvmftestfini 00:12:23.482 16:59:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:23.482 16:59:38 -- nvmf/common.sh@117 -- # sync 00:12:23.482 16:59:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.482 16:59:38 -- nvmf/common.sh@120 -- # set +e 00:12:23.482 16:59:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.482 16:59:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.482 rmmod nvme_tcp 00:12:23.482 rmmod nvme_fabrics 00:12:23.482 rmmod nvme_keyring 00:12:23.482 16:59:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.482 16:59:38 -- nvmf/common.sh@124 -- # set -e 00:12:23.482 16:59:38 -- nvmf/common.sh@125 -- # return 0 00:12:23.482 16:59:38 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:23.482 16:59:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:23.482 16:59:38 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:23.482 16:59:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:23.482 16:59:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.482 16:59:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.482 16:59:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.482 16:59:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.482 16:59:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.392 16:59:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:25.392 16:59:40 -- target/multipath.sh@48 -- # exit 0 00:12:25.392 16:59:40 -- target/multipath.sh@1 -- # nvmftestfini 00:12:25.392 16:59:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:25.392 16:59:40 -- nvmf/common.sh@117 -- # sync 00:12:25.392 16:59:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.392 16:59:40 -- nvmf/common.sh@120 -- # set +e 00:12:25.392 16:59:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.392 16:59:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.392 16:59:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.392 16:59:40 -- nvmf/common.sh@124 -- # set -e 00:12:25.392 16:59:40 -- nvmf/common.sh@125 -- # return 0 00:12:25.392 16:59:40 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:25.392 16:59:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:25.392 16:59:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:25.392 16:59:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:25.392 16:59:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.392 16:59:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.392 16:59:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.392 16:59:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.392 16:59:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.392 16:59:40 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:25.392 00:12:25.392 real 0m4.358s 00:12:25.392 user 0m0.841s 00:12:25.392 sys 0m1.519s 00:12:25.392 16:59:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:25.392 16:59:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.392 ************************************ 00:12:25.392 END TEST nvmf_multipath 00:12:25.392 ************************************ 00:12:25.392 16:59:40 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:25.392 16:59:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:25.392 16:59:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.392 16:59:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.392 ************************************ 00:12:25.392 START TEST nvmf_zcopy 00:12:25.392 ************************************ 00:12:25.392 16:59:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:25.392 * Looking for test storage... 
00:12:25.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.392 16:59:41 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.392 16:59:41 -- nvmf/common.sh@7 -- # uname -s 00:12:25.392 16:59:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.392 16:59:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.392 16:59:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.392 16:59:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.392 16:59:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.392 16:59:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.392 16:59:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.392 16:59:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.392 16:59:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.392 16:59:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.392 16:59:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.392 16:59:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.392 16:59:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.392 16:59:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.392 16:59:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.392 16:59:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.392 16:59:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.392 16:59:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.392 16:59:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.392 16:59:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.392 16:59:41 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.392 16:59:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.392 16:59:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.392 16:59:41 -- paths/export.sh@5 -- # export PATH 00:12:25.392 16:59:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.392 16:59:41 -- nvmf/common.sh@47 -- # : 0 00:12:25.392 16:59:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.392 16:59:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.392 16:59:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.392 16:59:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.392 16:59:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.392 16:59:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.392 16:59:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.392 16:59:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.392 16:59:41 -- target/zcopy.sh@12 -- # nvmftestinit 00:12:25.392 16:59:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:25.392 16:59:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.392 16:59:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:25.392 16:59:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:25.392 16:59:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:25.392 16:59:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.392 16:59:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.392 16:59:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.392 16:59:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:25.392 16:59:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:25.392 16:59:41 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:12:25.392 16:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.924 16:59:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:27.924 16:59:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.924 16:59:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.924 16:59:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.924 16:59:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.924 16:59:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.924 16:59:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.924 16:59:43 -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.924 16:59:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.924 16:59:43 -- nvmf/common.sh@296 -- # e810=() 00:12:27.924 16:59:43 -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.924 16:59:43 -- nvmf/common.sh@297 -- # x722=() 00:12:27.924 16:59:43 -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.924 16:59:43 -- nvmf/common.sh@298 -- # mlx=() 00:12:27.924 16:59:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.924 16:59:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.924 16:59:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.924 16:59:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.924 16:59:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.924 16:59:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.924 16:59:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:27.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:27.924 16:59:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.924 16:59:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:27.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:27.924 16:59:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.924 16:59:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:12:27.924 16:59:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.924 16:59:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:27.924 16:59:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.924 16:59:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:27.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:27.924 16:59:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.924 16:59:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.924 16:59:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.924 16:59:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:27.924 16:59:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.924 16:59:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:27.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:27.924 16:59:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.924 16:59:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:27.924 16:59:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:27.924 16:59:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:27.924 16:59:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:27.924 16:59:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.924 16:59:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.924 16:59:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.924 16:59:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.924 16:59:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.924 16:59:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.924 16:59:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.924 16:59:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:12:27.924 16:59:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.924 16:59:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.924 16:59:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.924 16:59:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.924 16:59:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.924 16:59:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.924 16:59:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.924 16:59:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.924 16:59:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.924 16:59:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.924 16:59:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.925 16:59:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:12:27.925 00:12:27.925 --- 10.0.0.2 ping statistics --- 00:12:27.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.925 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:12:27.925 16:59:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:12:27.925 00:12:27.925 --- 10.0.0.1 ping statistics --- 00:12:27.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.925 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:12:27.925 16:59:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.925 16:59:43 -- nvmf/common.sh@411 -- # return 0 00:12:27.925 16:59:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:27.925 16:59:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.925 16:59:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:27.925 16:59:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:27.925 16:59:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.925 16:59:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:27.925 16:59:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:27.925 16:59:43 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:27.925 16:59:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:27.925 16:59:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:27.925 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:27.925 16:59:43 -- nvmf/common.sh@470 -- # nvmfpid=1667582 00:12:27.925 16:59:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:27.925 16:59:43 -- nvmf/common.sh@471 -- # waitforlisten 1667582 00:12:27.925 16:59:43 -- common/autotest_common.sh@817 -- # '[' -z 1667582 ']' 00:12:27.925 16:59:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.925 16:59:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.925 16:59:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:27.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.925 16:59:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.925 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:27.925 [2024-04-18 16:59:43.368090] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:12:27.925 [2024-04-18 16:59:43.368177] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.925 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.925 [2024-04-18 16:59:43.433127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.925 [2024-04-18 16:59:43.538520] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.925 [2024-04-18 16:59:43.538586] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.925 [2024-04-18 16:59:43.538616] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.925 [2024-04-18 16:59:43.538628] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.925 [2024-04-18 16:59:43.538638] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:27.925 [2024-04-18 16:59:43.538665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.186 16:59:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:28.186 16:59:43 -- common/autotest_common.sh@850 -- # return 0 00:12:28.186 16:59:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:28.186 16:59:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:28.186 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 16:59:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.186 16:59:43 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:28.186 16:59:43 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:28.186 16:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.186 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 [2024-04-18 16:59:43.691048] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.186 16:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.186 16:59:43 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:28.186 16:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.186 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 16:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.186 16:59:43 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.186 16:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.186 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 [2024-04-18 16:59:43.707256] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.186 16:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.186 16:59:43 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:28.186 16:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.186 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 16:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.186 16:59:43 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:28.186 16:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.186 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 malloc0 00:12:28.186 16:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.186 16:59:43 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:28.186 16:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.186 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:28.186 16:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.186 16:59:43 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:28.186 16:59:43 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:28.186 16:59:43 -- nvmf/common.sh@521 -- # config=() 00:12:28.186 16:59:43 -- nvmf/common.sh@521 -- # local subsystem config 00:12:28.186 16:59:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:28.186 16:59:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:28.186 { 00:12:28.186 "params": { 00:12:28.186 "name": "Nvme$subsystem", 00:12:28.186 "trtype": "$TEST_TRANSPORT", 00:12:28.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:28.186 "adrfam": "ipv4", 00:12:28.186 "trsvcid": "$NVMF_PORT", 00:12:28.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:28.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:28.186 "hdgst": ${hdgst:-false}, 00:12:28.186 "ddgst": ${ddgst:-false} 00:12:28.186 }, 00:12:28.186 "method": "bdev_nvme_attach_controller" 00:12:28.186 } 00:12:28.186 
EOF 00:12:28.186 )") 00:12:28.186 16:59:43 -- nvmf/common.sh@543 -- # cat 00:12:28.186 16:59:43 -- nvmf/common.sh@545 -- # jq . 00:12:28.186 16:59:43 -- nvmf/common.sh@546 -- # IFS=, 00:12:28.186 16:59:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:28.186 "params": { 00:12:28.186 "name": "Nvme1", 00:12:28.186 "trtype": "tcp", 00:12:28.186 "traddr": "10.0.0.2", 00:12:28.186 "adrfam": "ipv4", 00:12:28.186 "trsvcid": "4420", 00:12:28.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:28.186 "hdgst": false, 00:12:28.186 "ddgst": false 00:12:28.186 }, 00:12:28.186 "method": "bdev_nvme_attach_controller" 00:12:28.186 }' 00:12:28.186 [2024-04-18 16:59:43.787326] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:12:28.186 [2024-04-18 16:59:43.787435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667721 ] 00:12:28.186 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.186 [2024-04-18 16:59:43.846335] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.446 [2024-04-18 16:59:43.959705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.446 Running I/O for 10 seconds... 
00:12:40.673 00:12:40.673 Latency(us) 00:12:40.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.673 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:40.673 Verification LBA range: start 0x0 length 0x1000 00:12:40.673 Nvme1n1 : 10.02 5251.18 41.02 0.00 0.00 24311.31 412.63 34564.17 00:12:40.673 =================================================================================================================== 00:12:40.673 Total : 5251.18 41.02 0.00 0.00 24311.31 412.63 34564.17 00:12:40.673 16:59:54 -- target/zcopy.sh@39 -- # perfpid=1668917 00:12:40.673 16:59:54 -- target/zcopy.sh@41 -- # xtrace_disable 00:12:40.673 16:59:54 -- common/autotest_common.sh@10 -- # set +x 00:12:40.673 16:59:54 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:40.673 16:59:54 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:40.673 16:59:54 -- nvmf/common.sh@521 -- # config=() 00:12:40.673 16:59:54 -- nvmf/common.sh@521 -- # local subsystem config 00:12:40.673 16:59:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:40.673 16:59:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:40.673 { 00:12:40.673 "params": { 00:12:40.673 "name": "Nvme$subsystem", 00:12:40.673 "trtype": "$TEST_TRANSPORT", 00:12:40.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:40.673 "adrfam": "ipv4", 00:12:40.673 "trsvcid": "$NVMF_PORT", 00:12:40.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:40.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:40.673 "hdgst": ${hdgst:-false}, 00:12:40.673 "ddgst": ${ddgst:-false} 00:12:40.673 }, 00:12:40.673 "method": "bdev_nvme_attach_controller" 00:12:40.673 } 00:12:40.673 EOF 00:12:40.673 )") 00:12:40.673 16:59:54 -- nvmf/common.sh@543 -- # cat 00:12:40.673 [2024-04-18 16:59:54.468467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:12:40.673 [2024-04-18 16:59:54.468511] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 16:59:54 -- nvmf/common.sh@545 -- # jq . 00:12:40.673 16:59:54 -- nvmf/common.sh@546 -- # IFS=, 00:12:40.673 16:59:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:40.673 "params": { 00:12:40.673 "name": "Nvme1", 00:12:40.673 "trtype": "tcp", 00:12:40.673 "traddr": "10.0.0.2", 00:12:40.673 "adrfam": "ipv4", 00:12:40.673 "trsvcid": "4420", 00:12:40.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:40.673 "hdgst": false, 00:12:40.673 "ddgst": false 00:12:40.673 }, 00:12:40.673 "method": "bdev_nvme_attach_controller" 00:12:40.673 }' 00:12:40.673 [2024-04-18 16:59:54.476412] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.476436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.484435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.484458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.492450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.492471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.500460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.500480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.506343] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:12:40.673 [2024-04-18 16:59:54.506451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668917 ] 00:12:40.673 [2024-04-18 16:59:54.508497] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.508519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.516503] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.516523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.524523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.524543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.532542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.532563] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.673 [2024-04-18 16:59:54.540568] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.540589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.548591] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.548618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.556614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.556635] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.564635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.564656] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.566213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.673 [2024-04-18 16:59:54.572704] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.572749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.580745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.580776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.588718] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.588753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.596752] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.596773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.604752] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.604771] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.612807] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.612827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.620805] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:12:40.673 [2024-04-18 16:59:54.620824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.628852] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.628882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.636880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.636912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.644869] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.644890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.652891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.652910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.660918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.660938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.668938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.668957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.676960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.676980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.680546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.673 [2024-04-18 16:59:54.684978] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.684997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.693001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.693026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.701063] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.701099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.709082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.709115] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.717101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.717134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.673 [2024-04-18 16:59:54.725126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.673 [2024-04-18 16:59:54.725163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.733152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.733184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.741173] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.741207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.749157] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.749179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.757209] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.757242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.765231] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.765264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.773249] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.773284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.781254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.781275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.789260] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.789280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.797284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.797303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.805321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.805345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.813334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 
[2024-04-18 16:59:54.813357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 [2024-04-18 16:59:54.821376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.674 [2024-04-18 16:59:54.821423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.674 Running I/O for 5 seconds...
[... identical "Requested NSID 1 already in use" / "Unable to add namespace" error pairs from subsystem.c:1896 and nvmf_rpc.c:1534 repeated at ~11 ms intervals, 16:59:54.829 through 16:59:56.719, elided ...]
00:12:41.195 [2024-04-18 16:59:56.730020] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.195 [2024-04-18 16:59:56.730050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:41.195 [2024-04-18 16:59:56.741590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.195 [2024-04-18 16:59:56.741616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.195 [2024-04-18 16:59:56.753267] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.195 [2024-04-18 16:59:56.753296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.195 [2024-04-18 16:59:56.764709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.195 [2024-04-18 16:59:56.764735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.195 [2024-04-18 16:59:56.776601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.195 [2024-04-18 16:59:56.776627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.789691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.789717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.800168] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.800197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.812345] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.812374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.823570] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.823596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.834906] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.834935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.846311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.846352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.857469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.857495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.868132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.868163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.879694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.879720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.196 [2024-04-18 16:59:56.891054] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.196 [2024-04-18 16:59:56.891084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.454 [2024-04-18 16:59:56.902143] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.454 [2024-04-18 16:59:56.902175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.454 [2024-04-18 16:59:56.913327] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.454 [2024-04-18 16:59:56.913354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.454 [2024-04-18 16:59:56.926495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:41.454 [2024-04-18 16:59:56.926521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.454 [2024-04-18 16:59:56.937133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.454 [2024-04-18 16:59:56.937163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.454 [2024-04-18 16:59:56.948232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.454 [2024-04-18 16:59:56.948258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.454 [2024-04-18 16:59:56.961579] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:56.961609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:56.972431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:56.972457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:56.983415] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:56.983441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:56.994566] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:56.994591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.005571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.005598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.016310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 
[2024-04-18 16:59:57.016336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.027766] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.027793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.039098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.039124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.050287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.050317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.063942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.063972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.074848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.074878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.086119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.086149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.097621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.097647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.109450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.109476] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.120646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.120674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.131900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.131927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.142592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.142618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.455 [2024-04-18 16:59:57.153904] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.455 [2024-04-18 16:59:57.153930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.713 [2024-04-18 16:59:57.165110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.713 [2024-04-18 16:59:57.165138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.713 [2024-04-18 16:59:57.177973] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.713 [2024-04-18 16:59:57.178008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.713 [2024-04-18 16:59:57.188038] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.713 [2024-04-18 16:59:57.188089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.713 [2024-04-18 16:59:57.199977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.713 [2024-04-18 16:59:57.200003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:41.713 [2024-04-18 16:59:57.211190] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.713 [2024-04-18 16:59:57.211216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.713 [2024-04-18 16:59:57.224059] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.713 [2024-04-18 16:59:57.224099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.713 [2024-04-18 16:59:57.236103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.713 [2024-04-18 16:59:57.236130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.713 [2024-04-18 16:59:57.245903] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.245930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.257585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.257611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.268473] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.268499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.279764] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.279790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.292522] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.292548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.302916] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.302942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.313924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.313950] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.324708] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.324734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.336129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.336156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.346953] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.346984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.357433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.357461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.368607] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.368634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.379966] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.379993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.393192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.393219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.403526] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.403574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.714 [2024-04-18 16:59:57.414569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.714 [2024-04-18 16:59:57.414595] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.427334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.427362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.437320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.437346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.447997] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.448023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.460639] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.460665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.470681] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.470707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.481885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 
[2024-04-18 16:59:57.481912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.492601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.492628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.503413] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.503439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.515066] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.515091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.526816] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.526846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.538297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.538327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.549772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.549802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.561414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.561440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.572992] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.573021] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.586262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.586292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.596794] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.596820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.608496] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.608522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.619484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.619517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.631319] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.631345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.642622] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.642650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.653939] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.653969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.972 [2024-04-18 16:59:57.665301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.665331] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:41.972 [2024-04-18 16:59:57.677198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.972 [2024-04-18 16:59:57.677228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.688788] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.688815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.700388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.700431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.711749] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.711778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.724879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.724909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.735433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.735474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.746852] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.746882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.758034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.758064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.769488] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.769514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.782586] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.782612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.793208] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.793238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.804968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.804998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.816123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.816152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.827675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.827701] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.838563] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.838596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.849835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.849864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.861183] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.861212] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.872161] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.872190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.885639] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.885666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.895890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.895920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.907211] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.907240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.920439] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.920464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.233 [2024-04-18 16:59:57.930332] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.233 [2024-04-18 16:59:57.930358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.493 [2024-04-18 16:59:57.942438] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.493 [2024-04-18 16:59:57.942465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.493 [2024-04-18 16:59:57.953922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.493 
[2024-04-18 16:59:57.953951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.493 [2024-04-18 16:59:57.965435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.493 [2024-04-18 16:59:57.965461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.493 [2024-04-18 16:59:57.976842] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.493 [2024-04-18 16:59:57.976872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.493 [2024-04-18 16:59:57.988289] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.494 [2024-04-18 16:59:57.988318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.494 [2024-04-18 16:59:58.000040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.494 [2024-04-18 16:59:58.000070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.494 [2024-04-18 16:59:58.013137] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.494 [2024-04-18 16:59:58.013167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.494 [2024-04-18 16:59:58.024022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.494 [2024-04-18 16:59:58.024051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.494 [2024-04-18 16:59:58.035888] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.494 [2024-04-18 16:59:58.035929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.494 [2024-04-18 16:59:58.047718] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.494 [2024-04-18 16:59:58.047748] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:42.494 [2024-04-18 16:59:58.060881] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:42.494 [2024-04-18 16:59:58.060911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors repeats roughly every 11 ms, with timestamps from 16:59:58.072083 through 16:59:59.844932 ...]
00:12:44.336 [2024-04-18 16:59:59.856004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 
[2024-04-18 16:59:59.856031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.866954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.866983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.877998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.878028] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.889197] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.889226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.898444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.898470] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:44.336
00:12:44.336 Latency(us)
00:12:44.336 Device Information : runtime(s)    IOPS    MiB/s  Fail/s  TO/s     Average   min       max
00:12:44.336 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:44.336 Nvme1n1            :       5.01 11300.34   88.28    0.00  0.00    11312.66  4660.34  21845.33
00:12:44.336 ===================================================================================================================
00:12:44.336 Total              :            11300.34   88.28    0.00  0.00    11312.66  4660.34  21845.33
00:12:44.336 [2024-04-18 16:59:59.904064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.904092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.912085] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.912113]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.920105] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.920130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.928192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.928239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.936200] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.936245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.944225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.944270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.952257] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.952302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.960249] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.960290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.968288] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.968334] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.976304] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.976347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:44.336 [2024-04-18 16:59:59.984326] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.984371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 16:59:59.992353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 16:59:59.992424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 17:00:00.000377] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 17:00:00.000448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 17:00:00.008442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 17:00:00.008494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 17:00:00.016460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 17:00:00.016509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 17:00:00.024481] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 17:00:00.024530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 17:00:00.032484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 17:00:00.032531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.336 [2024-04-18 17:00:00.040503] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.336 [2024-04-18 17:00:00.040549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.048508] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.048545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.056474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.056496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.064516] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.064542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.072528] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.072550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.080537] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.080559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.088601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.088639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.096627] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.096679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.104641] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.104699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.112623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.112645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.120645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.120684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.128679] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.128699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.136704] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.136739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.144747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.144786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.152776] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.152820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.160796] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.160831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.168798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.168822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.176821] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 
[2024-04-18 17:00:00.176845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 [2024-04-18 17:00:00.184838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.595 [2024-04-18 17:00:00.184862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1668917) - No such process 00:12:44.595 17:00:00 -- target/zcopy.sh@49 -- # wait 1668917 00:12:44.595 17:00:00 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.595 17:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.595 17:00:00 -- common/autotest_common.sh@10 -- # set +x 00:12:44.595 17:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.595 17:00:00 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:44.595 17:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.595 17:00:00 -- common/autotest_common.sh@10 -- # set +x 00:12:44.595 delay0 00:12:44.595 17:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.595 17:00:00 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:44.595 17:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.595 17:00:00 -- common/autotest_common.sh@10 -- # set +x 00:12:44.595 17:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.595 17:00:00 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:44.595 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.853 [2024-04-18 17:00:00.306371] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery 
service or discovery service referral
00:12:52.982 Initializing NVMe Controllers
00:12:52.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:52.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:52.982 Initialization complete. Launching workers.
00:12:52.982 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 16233
00:12:52.982 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16404, failed to submit 94
00:12:52.982 success 16289, unsuccess 115, failed 0
00:12:52.982 17:00:07 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:52.982 17:00:07 -- target/zcopy.sh@60 -- # nvmftestfini 00:12:52.982 17:00:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:52.982 17:00:07 -- nvmf/common.sh@117 -- # sync 00:12:52.982 17:00:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.982 17:00:07 -- nvmf/common.sh@120 -- # set +e 00:12:52.982 17:00:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.982 17:00:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.982 rmmod nvme_tcp 00:12:52.982 rmmod nvme_fabrics 00:12:52.982 rmmod nvme_keyring 00:12:52.982 17:00:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.982 17:00:07 -- nvmf/common.sh@124 -- # set -e 00:12:52.982 17:00:07 -- nvmf/common.sh@125 -- # return 0 00:12:52.982 17:00:07 -- nvmf/common.sh@478 -- # '[' -n 1667582 ']' 00:12:52.982 17:00:07 -- nvmf/common.sh@479 -- # killprocess 1667582 00:12:52.982 17:00:07 -- common/autotest_common.sh@936 -- # '[' -z 1667582 ']' 00:12:52.982 17:00:07 -- common/autotest_common.sh@940 -- # kill -0 1667582 00:12:52.982 17:00:07 -- common/autotest_common.sh@941 -- # uname 00:12:52.982 17:00:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.982 17:00:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1667582 00:12:52.982 17:00:07 --
common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:52.982 17:00:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:52.982 17:00:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1667582' 00:12:52.982 killing process with pid 1667582 00:12:52.982 17:00:07 -- common/autotest_common.sh@955 -- # kill 1667582 00:12:52.982 17:00:07 -- common/autotest_common.sh@960 -- # wait 1667582 00:12:52.982 17:00:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:52.982 17:00:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:52.982 17:00:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:52.982 17:00:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.982 17:00:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.982 17:00:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.982 17:00:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.982 17:00:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.377 17:00:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.377 00:12:54.377 real 0m28.834s 00:12:54.377 user 0m40.886s 00:12:54.377 sys 0m9.643s 00:12:54.377 17:00:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:54.377 17:00:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.377 ************************************ 00:12:54.377 END TEST nvmf_zcopy 00:12:54.377 ************************************ 00:12:54.377 17:00:09 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:54.378 17:00:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:54.378 17:00:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.378 17:00:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.378 ************************************ 00:12:54.378 START TEST nvmf_nmic 00:12:54.378 
************************************ 00:12:54.378 17:00:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:54.378 * Looking for test storage... 00:12:54.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.378 17:00:09 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.378 17:00:09 -- nvmf/common.sh@7 -- # uname -s 00:12:54.378 17:00:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.378 17:00:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.378 17:00:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.378 17:00:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.378 17:00:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.378 17:00:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.378 17:00:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.378 17:00:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.378 17:00:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.378 17:00:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.378 17:00:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.378 17:00:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.378 17:00:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.378 17:00:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.378 17:00:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.378 17:00:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.378 17:00:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.378 17:00:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 
00:12:54.378 17:00:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.378 17:00:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.378 17:00:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.378 17:00:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.378 17:00:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.378 17:00:10 -- paths/export.sh@5 -- # export PATH 00:12:54.378 17:00:10 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.378 17:00:10 -- nvmf/common.sh@47 -- # : 0 00:12:54.378 17:00:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.378 17:00:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.378 17:00:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.378 17:00:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.378 17:00:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.378 17:00:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.378 17:00:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.378 17:00:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.378 17:00:10 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.378 17:00:10 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.378 17:00:10 -- target/nmic.sh@14 -- # nvmftestinit 00:12:54.378 17:00:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:54.378 17:00:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.378 17:00:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:54.378 17:00:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:54.378 17:00:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:54.378 17:00:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.378 17:00:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.378 17:00:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.378 
17:00:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:54.378 17:00:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:54.378 17:00:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.378 17:00:10 -- common/autotest_common.sh@10 -- # set +x 00:12:56.312 17:00:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:56.312 17:00:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:56.312 17:00:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:56.312 17:00:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:56.312 17:00:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:56.312 17:00:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:56.312 17:00:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:56.312 17:00:11 -- nvmf/common.sh@295 -- # net_devs=() 00:12:56.312 17:00:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:56.312 17:00:11 -- nvmf/common.sh@296 -- # e810=() 00:12:56.312 17:00:11 -- nvmf/common.sh@296 -- # local -ga e810 00:12:56.312 17:00:11 -- nvmf/common.sh@297 -- # x722=() 00:12:56.312 17:00:11 -- nvmf/common.sh@297 -- # local -ga x722 00:12:56.312 17:00:11 -- nvmf/common.sh@298 -- # mlx=() 00:12:56.312 17:00:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:56.312 17:00:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.312 17:00:11 
-- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.312 17:00:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:56.312 17:00:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:56.312 17:00:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:56.312 17:00:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.312 17:00:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:56.312 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:56.312 17:00:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.312 17:00:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:56.312 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:56.312 17:00:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:56.312 17:00:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:56.312 17:00:11 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.312 17:00:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.312 17:00:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:56.312 17:00:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.312 17:00:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:56.312 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:56.312 17:00:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.312 17:00:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.312 17:00:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.312 17:00:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:56.312 17:00:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.312 17:00:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:56.312 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:56.312 17:00:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.312 17:00:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:56.312 17:00:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:56.312 17:00:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:56.312 17:00:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:56.312 17:00:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.312 17:00:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.312 17:00:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.312 17:00:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:56.312 17:00:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.312 17:00:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.312 17:00:11 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:56.312 17:00:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.312 17:00:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.312 17:00:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:56.312 17:00:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:56.312 17:00:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.312 17:00:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.575 17:00:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.575 17:00:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.575 17:00:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:56.575 17:00:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.575 17:00:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.575 17:00:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.575 17:00:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:56.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:56.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms
00:12:56.575
00:12:56.575 --- 10.0.0.2 ping statistics ---
00:12:56.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:56.575 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:12:56.575 17:00:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:56.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:56.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms
00:12:56.575
00:12:56.575 --- 10.0.0.1 ping statistics ---
00:12:56.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:56.575 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:12:56.575 17:00:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.575 17:00:12 -- nvmf/common.sh@411 -- # return 0 00:12:56.575 17:00:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:56.575 17:00:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.575 17:00:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:56.575 17:00:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:56.575 17:00:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.575 17:00:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:56.575 17:00:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:56.575 17:00:12 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:56.575 17:00:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:56.575 17:00:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:56.575 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:56.575 17:00:12 -- nvmf/common.sh@470 -- # nvmfpid=1673048 00:12:56.575 17:00:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.575 17:00:12 -- nvmf/common.sh@471 -- # waitforlisten 1673048 00:12:56.575 17:00:12 -- common/autotest_common.sh@817 -- # '[' -z 1673048 ']' 00:12:56.575 17:00:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.575 17:00:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:56.575 17:00:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:56.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.575 17:00:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:56.575 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:56.575 [2024-04-18 17:00:12.191346] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:12:56.575 [2024-04-18 17:00:12.191449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.575 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.575 [2024-04-18 17:00:12.255889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.842 [2024-04-18 17:00:12.367534] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.842 [2024-04-18 17:00:12.367595] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.842 [2024-04-18 17:00:12.367608] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.842 [2024-04-18 17:00:12.367619] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.842 [2024-04-18 17:00:12.367630] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:56.842 [2024-04-18 17:00:12.367764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.842 [2024-04-18 17:00:12.367788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.842 [2024-04-18 17:00:12.367843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.842 [2024-04-18 17:00:12.367845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.842 17:00:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:56.842 17:00:12 -- common/autotest_common.sh@850 -- # return 0 00:12:56.842 17:00:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:56.842 17:00:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:56.842 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:56.842 17:00:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.842 17:00:12 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.842 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.842 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:56.842 [2024-04-18 17:00:12.513966] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.842 17:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.842 17:00:12 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:56.842 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.842 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:56.842 Malloc0 00:12:56.842 17:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.842 17:00:12 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:56.842 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.842 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.105 17:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:12:57.105 17:00:12 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.105 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.105 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.105 17:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.105 17:00:12 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.105 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.105 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.105 [2024-04-18 17:00:12.564951] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.105 17:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.105 17:00:12 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:57.105 test case1: single bdev can't be used in multiple subsystems 00:12:57.105 17:00:12 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:57.105 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.105 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.105 17:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.105 17:00:12 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:57.105 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.105 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.105 17:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.105 17:00:12 -- target/nmic.sh@28 -- # nmic_status=0 00:12:57.105 17:00:12 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:57.105 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.105 17:00:12 -- common/autotest_common.sh@10 
-- # set +x 00:12:57.105 [2024-04-18 17:00:12.588838] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:57.105 [2024-04-18 17:00:12.588867] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:57.105 [2024-04-18 17:00:12.588891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.105 request: 00:12:57.105 { 00:12:57.105 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:57.105 "namespace": { 00:12:57.105 "bdev_name": "Malloc0", 00:12:57.105 "no_auto_visible": false 00:12:57.105 }, 00:12:57.105 "method": "nvmf_subsystem_add_ns", 00:12:57.105 "req_id": 1 00:12:57.105 } 00:12:57.105 Got JSON-RPC error response 00:12:57.105 response: 00:12:57.105 { 00:12:57.105 "code": -32602, 00:12:57.105 "message": "Invalid parameters" 00:12:57.105 } 00:12:57.105 17:00:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:57.105 17:00:12 -- target/nmic.sh@29 -- # nmic_status=1 00:12:57.105 17:00:12 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:57.105 17:00:12 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:57.105 Adding namespace failed - expected result. 
00:12:57.105 17:00:12 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:57.105 test case2: host connect to nvmf target in multiple paths 00:12:57.105 17:00:12 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:57.105 17:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.105 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:57.105 [2024-04-18 17:00:12.596928] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:57.105 17:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.105 17:00:12 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.686 17:00:13 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:58.254 17:00:13 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.254 17:00:13 -- common/autotest_common.sh@1184 -- # local i=0 00:12:58.254 17:00:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.254 17:00:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:58.254 17:00:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:00.150 17:00:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:00.150 17:00:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:00.150 17:00:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.150 17:00:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:00.150 17:00:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:13:00.150 17:00:15 -- common/autotest_common.sh@1194 -- # return 0 00:13:00.150 17:00:15 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:00.150 [global] 00:13:00.150 thread=1 00:13:00.150 invalidate=1 00:13:00.150 rw=write 00:13:00.150 time_based=1 00:13:00.150 runtime=1 00:13:00.150 ioengine=libaio 00:13:00.150 direct=1 00:13:00.150 bs=4096 00:13:00.150 iodepth=1 00:13:00.150 norandommap=0 00:13:00.150 numjobs=1 00:13:00.150 00:13:00.150 verify_dump=1 00:13:00.150 verify_backlog=512 00:13:00.150 verify_state_save=0 00:13:00.150 do_verify=1 00:13:00.150 verify=crc32c-intel 00:13:00.150 [job0] 00:13:00.150 filename=/dev/nvme0n1 00:13:00.150 Could not set queue depth (nvme0n1) 00:13:00.409 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.409 fio-3.35 00:13:00.409 Starting 1 thread 00:13:01.786 00:13:01.786 job0: (groupid=0, jobs=1): err= 0: pid=1673566: Thu Apr 18 17:00:17 2024 00:13:01.786 read: IOPS=969, BW=3876KiB/s (3969kB/s)(4008KiB/1034msec) 00:13:01.786 slat (nsec): min=7179, max=37658, avg=13712.69, stdev=4413.51 00:13:01.786 clat (usec): min=212, max=41015, avg=798.09, stdev=4607.39 00:13:01.786 lat (usec): min=221, max=41031, avg=811.80, stdev=4607.61 00:13:01.786 clat percentiles (usec): 00:13:01.786 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 249], 00:13:01.786 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:13:01.786 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 375], 00:13:01.786 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:01.786 | 99.99th=[41157] 00:13:01.786 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:13:01.786 slat (nsec): min=9166, max=69243, avg=17295.09, stdev=6184.75 00:13:01.786 clat (usec): min=154, max=403, avg=187.59, stdev=18.80 00:13:01.786 lat (usec): min=164, max=472, 
avg=204.88, stdev=23.68 00:13:01.786 clat percentiles (usec): 00:13:01.786 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:13:01.786 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:13:01.786 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 210], 00:13:01.786 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 314], 99.95th=[ 404], 00:13:01.786 | 99.99th=[ 404] 00:13:01.786 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:01.786 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:01.786 lat (usec) : 250=60.51%, 500=38.06%, 750=0.79% 00:13:01.786 lat (msec) : 50=0.64% 00:13:01.786 cpu : usr=2.90%, sys=3.68%, ctx=2026, majf=0, minf=2 00:13:01.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.786 issued rwts: total=1002,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:01.786 00:13:01.786 Run status group 0 (all jobs): 00:13:01.786 READ: bw=3876KiB/s (3969kB/s), 3876KiB/s-3876KiB/s (3969kB/s-3969kB/s), io=4008KiB (4104kB), run=1034-1034msec 00:13:01.787 WRITE: bw=3961KiB/s (4056kB/s), 3961KiB/s-3961KiB/s (4056kB/s-4056kB/s), io=4096KiB (4194kB), run=1034-1034msec 00:13:01.787 00:13:01.787 Disk stats (read/write): 00:13:01.787 nvme0n1: ios=1048/1024, merge=0/0, ticks=657/187, in_queue=844, util=92.18% 00:13:01.787 17:00:17 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:01.787 17:00:17 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.787 17:00:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:01.787 17:00:17 -- common/autotest_common.sh@1206 -- # lsblk -o 
NAME,SERIAL 00:13:01.787 17:00:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.787 17:00:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:01.787 17:00:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.787 17:00:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:01.787 17:00:17 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:01.787 17:00:17 -- target/nmic.sh@53 -- # nvmftestfini 00:13:01.787 17:00:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:01.787 17:00:17 -- nvmf/common.sh@117 -- # sync 00:13:01.787 17:00:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.787 17:00:17 -- nvmf/common.sh@120 -- # set +e 00:13:01.787 17:00:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.787 17:00:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.787 rmmod nvme_tcp 00:13:01.787 rmmod nvme_fabrics 00:13:01.787 rmmod nvme_keyring 00:13:01.787 17:00:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.787 17:00:17 -- nvmf/common.sh@124 -- # set -e 00:13:01.787 17:00:17 -- nvmf/common.sh@125 -- # return 0 00:13:01.787 17:00:17 -- nvmf/common.sh@478 -- # '[' -n 1673048 ']' 00:13:01.787 17:00:17 -- nvmf/common.sh@479 -- # killprocess 1673048 00:13:01.787 17:00:17 -- common/autotest_common.sh@936 -- # '[' -z 1673048 ']' 00:13:01.787 17:00:17 -- common/autotest_common.sh@940 -- # kill -0 1673048 00:13:01.787 17:00:17 -- common/autotest_common.sh@941 -- # uname 00:13:01.787 17:00:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:01.787 17:00:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1673048 00:13:01.787 17:00:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:01.787 17:00:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:01.787 17:00:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1673048' 00:13:01.787 killing process with pid 1673048 
00:13:01.787 17:00:17 -- common/autotest_common.sh@955 -- # kill 1673048 00:13:01.787 17:00:17 -- common/autotest_common.sh@960 -- # wait 1673048 00:13:02.045 17:00:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:02.045 17:00:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:02.045 17:00:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:02.045 17:00:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.045 17:00:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.045 17:00:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.045 17:00:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.045 17:00:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.582 17:00:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.582 00:13:04.582 real 0m9.837s 00:13:04.582 user 0m22.172s 00:13:04.582 sys 0m2.272s 00:13:04.582 17:00:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:04.582 17:00:19 -- common/autotest_common.sh@10 -- # set +x 00:13:04.582 ************************************ 00:13:04.582 END TEST nvmf_nmic 00:13:04.582 ************************************ 00:13:04.583 17:00:19 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:04.583 17:00:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:04.583 17:00:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.583 17:00:19 -- common/autotest_common.sh@10 -- # set +x 00:13:04.583 ************************************ 00:13:04.583 START TEST nvmf_fio_target 00:13:04.583 ************************************ 00:13:04.583 17:00:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:04.583 * Looking for test storage... 
00:13:04.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.583 17:00:19 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.583 17:00:19 -- nvmf/common.sh@7 -- # uname -s 00:13:04.583 17:00:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.583 17:00:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.583 17:00:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.583 17:00:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.583 17:00:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.583 17:00:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.583 17:00:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.583 17:00:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.583 17:00:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.583 17:00:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.583 17:00:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.583 17:00:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.583 17:00:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.583 17:00:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.583 17:00:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.583 17:00:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.583 17:00:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.583 17:00:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.583 17:00:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.583 17:00:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.583 17:00:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.583 17:00:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.583 17:00:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.583 17:00:19 -- paths/export.sh@5 -- # export PATH 00:13:04.583 17:00:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.583 17:00:19 -- nvmf/common.sh@47 -- # : 0 00:13:04.583 17:00:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.583 17:00:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.583 17:00:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.583 17:00:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.583 17:00:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.583 17:00:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.583 17:00:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.583 17:00:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.583 17:00:19 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.583 17:00:19 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.583 17:00:19 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.583 17:00:19 -- target/fio.sh@16 -- # nvmftestinit 00:13:04.583 17:00:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:04.583 17:00:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.583 17:00:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:04.583 17:00:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:04.583 17:00:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:04.583 17:00:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.583 17:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:13:04.583 17:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.583 17:00:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:04.583 17:00:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:04.583 17:00:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.583 17:00:19 -- common/autotest_common.sh@10 -- # set +x 00:13:06.485 17:00:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:06.485 17:00:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.485 17:00:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.485 17:00:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.485 17:00:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.485 17:00:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.485 17:00:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.485 17:00:21 -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.485 17:00:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.485 17:00:21 -- nvmf/common.sh@296 -- # e810=() 00:13:06.485 17:00:21 -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.485 17:00:21 -- nvmf/common.sh@297 -- # x722=() 00:13:06.485 17:00:21 -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.485 17:00:21 -- nvmf/common.sh@298 -- # mlx=() 00:13:06.485 17:00:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.485 17:00:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.485 17:00:21 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.485 17:00:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.485 17:00:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.485 17:00:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.485 17:00:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.485 17:00:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:06.485 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:06.485 17:00:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.485 17:00:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:06.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:06.485 17:00:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.485 
17:00:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.485 17:00:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.485 17:00:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:06.485 17:00:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.485 17:00:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:06.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:06.485 17:00:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.485 17:00:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.485 17:00:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.485 17:00:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:06.485 17:00:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.485 17:00:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:06.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:06.485 17:00:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.485 17:00:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:06.485 17:00:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:06.485 17:00:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:06.485 17:00:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:06.485 17:00:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.485 17:00:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.485 17:00:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.485 17:00:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.485 17:00:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.485 17:00:21 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.485 17:00:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.485 17:00:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.485 17:00:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.485 17:00:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.485 17:00:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.485 17:00:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.485 17:00:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.485 17:00:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.486 17:00:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.486 17:00:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.486 17:00:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.486 17:00:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.486 17:00:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.486 17:00:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:13:06.486 00:13:06.486 --- 10.0.0.2 ping statistics --- 00:13:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.486 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:06.486 17:00:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:06.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:13:06.486 00:13:06.486 --- 10.0.0.1 ping statistics --- 00:13:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.486 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:06.486 17:00:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.486 17:00:21 -- nvmf/common.sh@411 -- # return 0 00:13:06.486 17:00:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:06.486 17:00:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.486 17:00:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:06.486 17:00:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:06.486 17:00:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.486 17:00:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:06.486 17:00:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:06.486 17:00:21 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:06.486 17:00:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:06.486 17:00:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:06.486 17:00:21 -- common/autotest_common.sh@10 -- # set +x 00:13:06.486 17:00:21 -- nvmf/common.sh@470 -- # nvmfpid=1675645 00:13:06.486 17:00:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.486 17:00:21 -- nvmf/common.sh@471 -- # waitforlisten 1675645 00:13:06.486 17:00:21 -- common/autotest_common.sh@817 -- # '[' -z 1675645 ']' 00:13:06.486 17:00:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.486 17:00:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:06.486 17:00:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:06.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.486 17:00:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:06.486 17:00:21 -- common/autotest_common.sh@10 -- # set +x 00:13:06.486 [2024-04-18 17:00:22.040630] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:13:06.486 [2024-04-18 17:00:22.040730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.486 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.486 [2024-04-18 17:00:22.110780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.744 [2024-04-18 17:00:22.232260] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.744 [2024-04-18 17:00:22.232317] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.744 [2024-04-18 17:00:22.232341] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.744 [2024-04-18 17:00:22.232355] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.744 [2024-04-18 17:00:22.232369] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:06.744 [2024-04-18 17:00:22.232449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.744 [2024-04-18 17:00:22.232484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.744 [2024-04-18 17:00:22.232511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.744 [2024-04-18 17:00:22.232514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.333 17:00:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:07.333 17:00:22 -- common/autotest_common.sh@850 -- # return 0 00:13:07.333 17:00:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:07.333 17:00:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:07.333 17:00:22 -- common/autotest_common.sh@10 -- # set +x 00:13:07.333 17:00:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.333 17:00:22 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:07.590 [2024-04-18 17:00:23.212094] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.590 17:00:23 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:07.847 17:00:23 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:07.847 17:00:23 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:08.105 17:00:23 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:08.105 17:00:23 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:08.363 17:00:24 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:08.363 17:00:24 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:08.621 17:00:24 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:13:08.621 17:00:24 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:08.879 17:00:24 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.136 17:00:24 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:09.136 17:00:24 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.394 17:00:25 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:09.394 17:00:25 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.651 17:00:25 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:09.651 17:00:25 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:09.908 17:00:25 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.166 17:00:25 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:10.166 17:00:25 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:10.424 17:00:26 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:10.424 17:00:26 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.681 17:00:26 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.940 [2024-04-18 17:00:26.508876] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.940 17:00:26 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:11.198 17:00:26 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:11.455 17:00:27 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.023 17:00:27 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:12.023 17:00:27 -- common/autotest_common.sh@1184 -- # local i=0 00:13:12.023 17:00:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.023 17:00:27 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:12.023 17:00:27 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:12.023 17:00:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:14.559 17:00:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:14.559 17:00:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:14.559 17:00:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.559 17:00:29 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:14.559 17:00:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.559 17:00:29 -- common/autotest_common.sh@1194 -- # return 0 00:13:14.559 17:00:29 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:14.559 [global] 00:13:14.559 thread=1 00:13:14.559 invalidate=1 00:13:14.559 rw=write 00:13:14.559 time_based=1 00:13:14.559 runtime=1 00:13:14.559 ioengine=libaio 00:13:14.559 direct=1 00:13:14.559 bs=4096 00:13:14.559 
iodepth=1 00:13:14.559 norandommap=0 00:13:14.559 numjobs=1 00:13:14.559 00:13:14.559 verify_dump=1 00:13:14.559 verify_backlog=512 00:13:14.559 verify_state_save=0 00:13:14.559 do_verify=1 00:13:14.559 verify=crc32c-intel 00:13:14.559 [job0] 00:13:14.559 filename=/dev/nvme0n1 00:13:14.559 [job1] 00:13:14.559 filename=/dev/nvme0n2 00:13:14.559 [job2] 00:13:14.559 filename=/dev/nvme0n3 00:13:14.559 [job3] 00:13:14.559 filename=/dev/nvme0n4 00:13:14.559 Could not set queue depth (nvme0n1) 00:13:14.559 Could not set queue depth (nvme0n2) 00:13:14.559 Could not set queue depth (nvme0n3) 00:13:14.559 Could not set queue depth (nvme0n4) 00:13:14.559 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.559 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.559 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.559 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.559 fio-3.35 00:13:14.559 Starting 4 threads 00:13:15.504 00:13:15.504 job0: (groupid=0, jobs=1): err= 0: pid=1676730: Thu Apr 18 17:00:31 2024 00:13:15.504 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:15.504 slat (nsec): min=5800, max=59006, avg=14252.86, stdev=6387.98 00:13:15.504 clat (usec): min=266, max=580, avg=324.37, stdev=34.31 00:13:15.504 lat (usec): min=273, max=598, avg=338.63, stdev=36.17 00:13:15.505 clat percentiles (usec): 00:13:15.505 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:13:15.505 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 326], 00:13:15.505 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 367], 00:13:15.505 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 570], 99.95th=[ 578], 00:13:15.505 | 99.99th=[ 578] 00:13:15.505 write: IOPS=1861, BW=7445KiB/s (7623kB/s)(7452KiB/1001msec); 0 
zone resets 00:13:15.505 slat (nsec): min=7472, max=66551, avg=17337.97, stdev=8476.62 00:13:15.505 clat (usec): min=157, max=472, avg=231.74, stdev=44.09 00:13:15.505 lat (usec): min=165, max=499, avg=249.08, stdev=44.49 00:13:15.505 clat percentiles (usec): 00:13:15.505 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 204], 00:13:15.505 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:13:15.505 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 330], 00:13:15.505 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 474], 99.95th=[ 474], 00:13:15.505 | 99.99th=[ 474] 00:13:15.505 bw ( KiB/s): min= 8192, max= 8192, per=45.03%, avg=8192.00, stdev= 0.00, samples=1 00:13:15.505 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:15.505 lat (usec) : 250=44.13%, 500=55.52%, 750=0.35% 00:13:15.505 cpu : usr=4.60%, sys=6.70%, ctx=3399, majf=0, minf=1 00:13:15.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.505 issued rwts: total=1536,1863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.505 job1: (groupid=0, jobs=1): err= 0: pid=1676731: Thu Apr 18 17:00:31 2024 00:13:15.505 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:13:15.505 slat (nsec): min=7402, max=45016, avg=19942.83, stdev=8853.25 00:13:15.505 clat (usec): min=404, max=42033, avg=39661.96, stdev=8571.34 00:13:15.505 lat (usec): min=422, max=42051, avg=39681.90, stdev=8571.75 00:13:15.505 clat percentiles (usec): 00:13:15.505 | 1.00th=[ 404], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:15.505 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:15.505 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:15.505 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:15.505 | 99.99th=[42206] 00:13:15.505 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:13:15.505 slat (nsec): min=5970, max=35357, avg=7946.90, stdev=3105.66 00:13:15.505 clat (usec): min=158, max=474, avg=227.22, stdev=41.83 00:13:15.505 lat (usec): min=164, max=484, avg=235.17, stdev=42.18 00:13:15.505 clat percentiles (usec): 00:13:15.505 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:13:15.505 | 30.00th=[ 198], 40.00th=[ 208], 50.00th=[ 225], 60.00th=[ 233], 00:13:15.505 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 306], 00:13:15.505 | 99.00th=[ 375], 99.50th=[ 424], 99.90th=[ 474], 99.95th=[ 474], 00:13:15.505 | 99.99th=[ 474] 00:13:15.505 bw ( KiB/s): min= 4096, max= 4096, per=22.51%, avg=4096.00, stdev= 0.00, samples=1 00:13:15.505 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:15.505 lat (usec) : 250=74.21%, 500=21.68% 00:13:15.505 lat (msec) : 50=4.11% 00:13:15.505 cpu : usr=0.19%, sys=0.39%, ctx=535, majf=0, minf=1 00:13:15.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.505 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.505 job2: (groupid=0, jobs=1): err= 0: pid=1676732: Thu Apr 18 17:00:31 2024 00:13:15.505 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:13:15.505 slat (nsec): min=7767, max=35280, avg=20176.61, stdev=6562.30 00:13:15.505 clat (usec): min=337, max=41266, avg=39205.19, stdev=8473.61 00:13:15.505 lat (usec): min=372, max=41274, avg=39225.36, stdev=8470.26 00:13:15.505 clat percentiles (usec): 00:13:15.505 | 1.00th=[ 338], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 
00:13:15.505 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:15.505 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:15.505 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:15.505 | 99.99th=[41157] 00:13:15.505 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:13:15.505 slat (nsec): min=7668, max=30925, avg=9330.98, stdev=2705.43 00:13:15.505 clat (usec): min=180, max=645, avg=224.20, stdev=37.54 00:13:15.505 lat (usec): min=187, max=653, avg=233.53, stdev=37.72 00:13:15.505 clat percentiles (usec): 00:13:15.505 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 196], 00:13:15.505 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 219], 60.00th=[ 233], 00:13:15.505 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 00:13:15.505 | 99.00th=[ 371], 99.50th=[ 437], 99.90th=[ 644], 99.95th=[ 644], 00:13:15.505 | 99.99th=[ 644] 00:13:15.505 bw ( KiB/s): min= 4096, max= 4096, per=22.51%, avg=4096.00, stdev= 0.00, samples=1 00:13:15.505 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:15.505 lat (usec) : 250=78.88%, 500=16.82%, 750=0.19% 00:13:15.505 lat (msec) : 50=4.11% 00:13:15.505 cpu : usr=0.49%, sys=0.49%, ctx=535, majf=0, minf=1 00:13:15.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.505 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.505 job3: (groupid=0, jobs=1): err= 0: pid=1676733: Thu Apr 18 17:00:31 2024 00:13:15.505 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:15.505 slat (nsec): min=4729, max=67366, avg=17818.52, stdev=10225.46 00:13:15.505 clat (usec): min=266, max=564, avg=336.18, stdev=45.17 
00:13:15.505 lat (usec): min=278, max=577, avg=353.99, stdev=47.73 00:13:15.505 clat percentiles (usec): 00:13:15.505 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:13:15.505 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 338], 00:13:15.505 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 416], 00:13:15.505 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 562], 00:13:15.505 | 99.99th=[ 562] 00:13:15.505 write: IOPS=1814, BW=7257KiB/s (7431kB/s)(7264KiB/1001msec); 0 zone resets 00:13:15.505 slat (nsec): min=6407, max=70932, avg=13383.81, stdev=6653.56 00:13:15.505 clat (usec): min=191, max=442, avg=229.69, stdev=22.07 00:13:15.505 lat (usec): min=203, max=450, avg=243.08, stdev=22.82 00:13:15.505 clat percentiles (usec): 00:13:15.505 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:13:15.505 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:13:15.505 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 265], 00:13:15.505 | 99.00th=[ 302], 99.50th=[ 355], 99.90th=[ 400], 99.95th=[ 445], 00:13:15.505 | 99.99th=[ 445] 00:13:15.505 bw ( KiB/s): min= 8192, max= 8192, per=45.03%, avg=8192.00, stdev= 0.00, samples=1 00:13:15.505 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:15.505 lat (usec) : 250=47.34%, 500=52.15%, 750=0.51% 00:13:15.505 cpu : usr=3.50%, sys=4.80%, ctx=3352, majf=0, minf=2 00:13:15.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.505 issued rwts: total=1536,1816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.505 00:13:15.505 Run status group 0 (all jobs): 00:13:15.505 READ: bw=11.8MiB/s (12.4MB/s), 89.0KiB/s-6138KiB/s (91.1kB/s-6285kB/s), io=12.2MiB (12.8MB), 
run=1001-1034msec 00:13:15.505 WRITE: bw=17.8MiB/s (18.6MB/s), 1981KiB/s-7445KiB/s (2028kB/s-7623kB/s), io=18.4MiB (19.3MB), run=1001-1034msec 00:13:15.505 00:13:15.505 Disk stats (read/write): 00:13:15.505 nvme0n1: ios=1378/1536, merge=0/0, ticks=435/322, in_queue=757, util=86.47% 00:13:15.505 nvme0n2: ios=42/512, merge=0/0, ticks=726/115, in_queue=841, util=86.86% 00:13:15.505 nvme0n3: ios=17/512, merge=0/0, ticks=698/114, in_queue=812, util=88.88% 00:13:15.505 nvme0n4: ios=1304/1536, merge=0/0, ticks=433/333, in_queue=766, util=89.63% 00:13:15.505 17:00:31 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:15.505 [global] 00:13:15.505 thread=1 00:13:15.505 invalidate=1 00:13:15.505 rw=randwrite 00:13:15.505 time_based=1 00:13:15.505 runtime=1 00:13:15.505 ioengine=libaio 00:13:15.505 direct=1 00:13:15.505 bs=4096 00:13:15.505 iodepth=1 00:13:15.505 norandommap=0 00:13:15.505 numjobs=1 00:13:15.505 00:13:15.505 verify_dump=1 00:13:15.505 verify_backlog=512 00:13:15.505 verify_state_save=0 00:13:15.505 do_verify=1 00:13:15.505 verify=crc32c-intel 00:13:15.505 [job0] 00:13:15.505 filename=/dev/nvme0n1 00:13:15.505 [job1] 00:13:15.505 filename=/dev/nvme0n2 00:13:15.505 [job2] 00:13:15.505 filename=/dev/nvme0n3 00:13:15.505 [job3] 00:13:15.505 filename=/dev/nvme0n4 00:13:15.505 Could not set queue depth (nvme0n1) 00:13:15.765 Could not set queue depth (nvme0n2) 00:13:15.765 Could not set queue depth (nvme0n3) 00:13:15.765 Could not set queue depth (nvme0n4) 00:13:15.765 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:15.765 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:15.765 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:15.765 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:15.765 fio-3.35 00:13:15.765 Starting 4 threads 00:13:17.143 00:13:17.143 job0: (groupid=0, jobs=1): err= 0: pid=1677077: Thu Apr 18 17:00:32 2024 00:13:17.143 read: IOPS=991, BW=3965KiB/s (4060kB/s)(4112KiB/1037msec) 00:13:17.143 slat (nsec): min=6284, max=63007, avg=20380.06, stdev=8268.42 00:13:17.143 clat (usec): min=266, max=41043, avg=600.26, stdev=2958.28 00:13:17.143 lat (usec): min=273, max=41061, avg=620.64, stdev=2957.85 00:13:17.143 clat percentiles (usec): 00:13:17.143 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 326], 00:13:17.143 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:13:17.143 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 461], 95.00th=[ 498], 00:13:17.143 | 99.00th=[ 594], 99.50th=[28705], 99.90th=[41157], 99.95th=[41157], 00:13:17.143 | 99.99th=[41157] 00:13:17.143 write: IOPS=1481, BW=5925KiB/s (6067kB/s)(6144KiB/1037msec); 0 zone resets 00:13:17.143 slat (usec): min=7, max=6719, avg=20.42, stdev=171.19 00:13:17.143 clat (usec): min=165, max=513, avg=230.45, stdev=29.12 00:13:17.143 lat (usec): min=177, max=7042, avg=250.86, stdev=176.13 00:13:17.143 clat percentiles (usec): 00:13:17.143 | 1.00th=[ 174], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 210], 00:13:17.143 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:13:17.143 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 285], 00:13:17.143 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 416], 99.95th=[ 515], 00:13:17.143 | 99.99th=[ 515] 00:13:17.143 bw ( KiB/s): min= 4096, max= 8192, per=25.93%, avg=6144.00, stdev=2896.31, samples=2 00:13:17.143 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:13:17.143 lat (usec) : 250=49.77%, 500=48.32%, 750=1.68% 00:13:17.143 lat (msec) : 50=0.23% 00:13:17.143 cpu : usr=3.19%, sys=5.21%, ctx=2566, majf=0, minf=1 00:13:17.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.143 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.143 issued rwts: total=1028,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.143 job1: (groupid=0, jobs=1): err= 0: pid=1677078: Thu Apr 18 17:00:32 2024 00:13:17.143 read: IOPS=1501, BW=6006KiB/s (6150kB/s)(6012KiB/1001msec) 00:13:17.143 slat (nsec): min=5950, max=45488, avg=15531.21, stdev=5172.94 00:13:17.143 clat (usec): min=298, max=1074, avg=357.96, stdev=28.29 00:13:17.143 lat (usec): min=308, max=1095, avg=373.49, stdev=29.93 00:13:17.143 clat percentiles (usec): 00:13:17.143 | 1.00th=[ 314], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:13:17.143 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 363], 00:13:17.143 | 70.00th=[ 367], 80.00th=[ 371], 90.00th=[ 379], 95.00th=[ 383], 00:13:17.143 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 668], 99.95th=[ 1074], 00:13:17.143 | 99.99th=[ 1074] 00:13:17.143 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:17.143 slat (nsec): min=7640, max=65230, avg=18318.78, stdev=8679.75 00:13:17.143 clat (usec): min=190, max=951, avg=257.45, stdev=53.02 00:13:17.143 lat (usec): min=207, max=964, avg=275.77, stdev=57.24 00:13:17.143 clat percentiles (usec): 00:13:17.143 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 227], 00:13:17.143 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:13:17.143 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 318], 95.00th=[ 388], 00:13:17.143 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 775], 99.95th=[ 955], 00:13:17.143 | 99.99th=[ 955] 00:13:17.143 bw ( KiB/s): min= 8192, max= 8192, per=34.57%, avg=8192.00, stdev= 0.00, samples=1 00:13:17.143 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:17.143 lat (usec) : 250=32.08%, 500=67.65%, 750=0.16%, 1000=0.07% 00:13:17.143 
lat (msec) : 2=0.03% 00:13:17.143 cpu : usr=3.70%, sys=7.20%, ctx=3040, majf=0, minf=2 00:13:17.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.143 issued rwts: total=1503,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.143 job2: (groupid=0, jobs=1): err= 0: pid=1677079: Thu Apr 18 17:00:32 2024 00:13:17.143 read: IOPS=1356, BW=5427KiB/s (5557kB/s)(5432KiB/1001msec) 00:13:17.143 slat (nsec): min=6162, max=56543, avg=17351.87, stdev=7162.08 00:13:17.143 clat (usec): min=243, max=40877, avg=423.45, stdev=1403.60 00:13:17.143 lat (usec): min=249, max=40886, avg=440.80, stdev=1403.68 00:13:17.143 clat percentiles (usec): 00:13:17.143 | 1.00th=[ 262], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 310], 00:13:17.143 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 375], 00:13:17.143 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 482], 95.00th=[ 510], 00:13:17.143 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[32375], 99.95th=[40633], 00:13:17.143 | 99.99th=[40633] 00:13:17.143 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:17.143 slat (nsec): min=7403, max=57022, avg=15906.76, stdev=6645.17 00:13:17.143 clat (usec): min=167, max=440, avg=236.24, stdev=37.42 00:13:17.143 lat (usec): min=175, max=465, avg=252.14, stdev=39.62 00:13:17.143 clat percentiles (usec): 00:13:17.143 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:13:17.143 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:13:17.143 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 285], 95.00th=[ 318], 00:13:17.143 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 441], 99.95th=[ 441], 00:13:17.143 | 99.99th=[ 441] 00:13:17.143 bw ( KiB/s): min= 8192, max= 8192, per=34.57%, 
avg=8192.00, stdev= 0.00, samples=1 00:13:17.143 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:17.143 lat (usec) : 250=41.36%, 500=55.18%, 750=3.39% 00:13:17.143 lat (msec) : 50=0.07% 00:13:17.143 cpu : usr=4.00%, sys=5.90%, ctx=2895, majf=0, minf=1 00:13:17.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.143 issued rwts: total=1358,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.143 job3: (groupid=0, jobs=1): err= 0: pid=1677080: Thu Apr 18 17:00:32 2024 00:13:17.143 read: IOPS=1052, BW=4209KiB/s (4310kB/s)(4356KiB/1035msec) 00:13:17.143 slat (nsec): min=6378, max=52104, avg=18498.40, stdev=6535.25 00:13:17.143 clat (usec): min=273, max=41092, avg=561.41, stdev=2745.45 00:13:17.143 lat (usec): min=294, max=41110, avg=579.91, stdev=2745.37 00:13:17.143 clat percentiles (usec): 00:13:17.143 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:13:17.143 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 363], 00:13:17.143 | 70.00th=[ 388], 80.00th=[ 429], 90.00th=[ 515], 95.00th=[ 553], 00:13:17.143 | 99.00th=[ 627], 99.50th=[ 1811], 99.90th=[41157], 99.95th=[41157], 00:13:17.143 | 99.99th=[41157] 00:13:17.143 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:13:17.143 slat (nsec): min=6376, max=59200, avg=15931.14, stdev=6934.68 00:13:17.143 clat (usec): min=171, max=415, avg=237.93, stdev=30.63 00:13:17.143 lat (usec): min=184, max=436, avg=253.86, stdev=33.49 00:13:17.143 clat percentiles (usec): 00:13:17.143 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 217], 00:13:17.143 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:13:17.143 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 273], 
95.00th=[ 293], 00:13:17.143 | 99.00th=[ 359], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 416], 00:13:17.143 | 99.99th=[ 416] 00:13:17.143 bw ( KiB/s): min= 4096, max= 8192, per=25.93%, avg=6144.00, stdev=2896.31, samples=2 00:13:17.143 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:13:17.143 lat (usec) : 250=44.88%, 500=49.41%, 750=5.49% 00:13:17.143 lat (msec) : 2=0.04%, 50=0.19% 00:13:17.143 cpu : usr=2.80%, sys=6.09%, ctx=2625, majf=0, minf=1 00:13:17.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.143 issued rwts: total=1089,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.143 00:13:17.143 Run status group 0 (all jobs): 00:13:17.143 READ: bw=18.8MiB/s (19.7MB/s), 3965KiB/s-6006KiB/s (4060kB/s-6150kB/s), io=19.4MiB (20.4MB), run=1001-1037msec 00:13:17.143 WRITE: bw=23.1MiB/s (24.3MB/s), 5925KiB/s-6138KiB/s (6067kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1037msec 00:13:17.144 00:13:17.144 Disk stats (read/write): 00:13:17.144 nvme0n1: ios=1066/1444, merge=0/0, ticks=1607/313, in_queue=1920, util=98.90% 00:13:17.144 nvme0n2: ios=1147/1536, merge=0/0, ticks=560/380, in_queue=940, util=97.66% 00:13:17.144 nvme0n3: ios=1068/1536, merge=0/0, ticks=1368/348, in_queue=1716, util=97.71% 00:13:17.144 nvme0n4: ios=1081/1536, merge=0/0, ticks=708/347, in_queue=1055, util=95.17% 00:13:17.144 17:00:32 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:17.144 [global] 00:13:17.144 thread=1 00:13:17.144 invalidate=1 00:13:17.144 rw=write 00:13:17.144 time_based=1 00:13:17.144 runtime=1 00:13:17.144 ioengine=libaio 00:13:17.144 direct=1 00:13:17.144 bs=4096 00:13:17.144 iodepth=128 
00:13:17.144 norandommap=0 00:13:17.144 numjobs=1 00:13:17.144 00:13:17.144 verify_dump=1 00:13:17.144 verify_backlog=512 00:13:17.144 verify_state_save=0 00:13:17.144 do_verify=1 00:13:17.144 verify=crc32c-intel 00:13:17.144 [job0] 00:13:17.144 filename=/dev/nvme0n1 00:13:17.144 [job1] 00:13:17.144 filename=/dev/nvme0n2 00:13:17.144 [job2] 00:13:17.144 filename=/dev/nvme0n3 00:13:17.144 [job3] 00:13:17.144 filename=/dev/nvme0n4 00:13:17.144 Could not set queue depth (nvme0n1) 00:13:17.144 Could not set queue depth (nvme0n2) 00:13:17.144 Could not set queue depth (nvme0n3) 00:13:17.144 Could not set queue depth (nvme0n4) 00:13:17.402 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.402 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.402 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.402 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.402 fio-3.35 00:13:17.402 Starting 4 threads 00:13:18.777 00:13:18.777 job0: (groupid=0, jobs=1): err= 0: pid=1677311: Thu Apr 18 17:00:34 2024 00:13:18.777 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:13:18.777 slat (usec): min=2, max=10681, avg=89.50, stdev=631.61 00:13:18.777 clat (usec): min=3187, max=22706, avg=11511.33, stdev=2804.43 00:13:18.777 lat (usec): min=3194, max=22729, avg=11600.83, stdev=2845.69 00:13:18.777 clat percentiles (usec): 00:13:18.777 | 1.00th=[ 4817], 5.00th=[ 7832], 10.00th=[ 9503], 20.00th=[10028], 00:13:18.777 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:13:18.777 | 70.00th=[11863], 80.00th=[13173], 90.00th=[15795], 95.00th=[17433], 00:13:18.777 | 99.00th=[20055], 99.50th=[20841], 99.90th=[21890], 99.95th=[21890], 00:13:18.777 | 99.99th=[22676] 00:13:18.777 write: IOPS=5980, BW=23.4MiB/s 
(24.5MB/s)(23.6MiB/1011msec); 0 zone resets 00:13:18.777 slat (usec): min=3, max=9074, avg=72.84, stdev=379.89 00:13:18.777 clat (usec): min=280, max=25649, avg=10500.05, stdev=2938.44 00:13:18.777 lat (usec): min=738, max=25653, avg=10572.90, stdev=2958.84 00:13:18.777 clat percentiles (usec): 00:13:18.777 | 1.00th=[ 2540], 5.00th=[ 4948], 10.00th=[ 6456], 20.00th=[ 8356], 00:13:18.777 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11338], 60.00th=[11469], 00:13:18.777 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12780], 95.00th=[14746], 00:13:18.777 | 99.00th=[19530], 99.50th=[21103], 99.90th=[22676], 99.95th=[22676], 00:13:18.777 | 99.99th=[25560] 00:13:18.777 bw ( KiB/s): min=23096, max=24304, per=35.10%, avg=23700.00, stdev=854.18, samples=2 00:13:18.777 iops : min= 5774, max= 6076, avg=5925.00, stdev=213.55, samples=2 00:13:18.777 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.01% 00:13:18.777 lat (msec) : 2=0.31%, 4=1.37%, 10=22.00%, 20=75.23%, 50=1.03% 00:13:18.778 cpu : usr=6.73%, sys=8.61%, ctx=646, majf=0, minf=1 00:13:18.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:18.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.778 issued rwts: total=5632,6046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.778 job1: (groupid=0, jobs=1): err= 0: pid=1677312: Thu Apr 18 17:00:34 2024 00:13:18.778 read: IOPS=2900, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1007msec) 00:13:18.778 slat (usec): min=2, max=10490, avg=140.72, stdev=834.00 00:13:18.778 clat (usec): min=5181, max=44765, avg=18772.95, stdev=6573.59 00:13:18.778 lat (usec): min=7449, max=47548, avg=18913.67, stdev=6651.40 00:13:18.778 clat percentiles (usec): 00:13:18.778 | 1.00th=[ 9372], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:13:18.778 | 30.00th=[13698], 40.00th=[15664], 
50.00th=[16188], 60.00th=[19268], 00:13:18.778 | 70.00th=[23987], 80.00th=[24511], 90.00th=[28443], 95.00th=[29754], 00:13:18.778 | 99.00th=[35914], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:13:18.778 | 99.99th=[44827] 00:13:18.778 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:13:18.778 slat (usec): min=3, max=12831, avg=179.15, stdev=960.96 00:13:18.778 clat (usec): min=5806, max=66068, avg=23562.07, stdev=15202.59 00:13:18.778 lat (usec): min=5811, max=66109, avg=23741.22, stdev=15313.52 00:13:18.778 clat percentiles (usec): 00:13:18.778 | 1.00th=[ 6783], 5.00th=[ 8094], 10.00th=[11207], 20.00th=[12387], 00:13:18.778 | 30.00th=[12780], 40.00th=[16581], 50.00th=[18482], 60.00th=[22152], 00:13:18.778 | 70.00th=[24773], 80.00th=[29230], 90.00th=[51643], 95.00th=[59507], 00:13:18.778 | 99.00th=[64226], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:13:18.778 | 99.99th=[66323] 00:13:18.778 bw ( KiB/s): min=12288, max=12288, per=18.20%, avg=12288.00, stdev= 0.00, samples=2 00:13:18.778 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:13:18.778 lat (msec) : 10=4.96%, 20=53.65%, 50=35.79%, 100=5.61% 00:13:18.778 cpu : usr=3.28%, sys=4.37%, ctx=257, majf=0, minf=1 00:13:18.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:13:18.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.778 issued rwts: total=2921,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.778 job2: (groupid=0, jobs=1): err= 0: pid=1677313: Thu Apr 18 17:00:34 2024 00:13:18.778 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:13:18.778 slat (usec): min=3, max=12927, avg=112.88, stdev=814.93 00:13:18.778 clat (usec): min=4424, max=27705, avg=14111.22, stdev=3486.36 00:13:18.778 lat (usec): min=4432, 
max=27735, avg=14224.10, stdev=3536.62 00:13:18.778 clat percentiles (usec): 00:13:18.778 | 1.00th=[ 5407], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[12387], 00:13:18.778 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13566], 00:13:18.778 | 70.00th=[14484], 80.00th=[15926], 90.00th=[19530], 95.00th=[21627], 00:13:18.778 | 99.00th=[24249], 99.50th=[24773], 99.90th=[26346], 99.95th=[26346], 00:13:18.778 | 99.99th=[27657] 00:13:18.778 write: IOPS=5020, BW=19.6MiB/s (20.6MB/s)(19.9MiB/1013msec); 0 zone resets 00:13:18.778 slat (usec): min=4, max=12548, avg=85.51, stdev=489.00 00:13:18.778 clat (usec): min=433, max=28101, avg=12425.20, stdev=3410.61 00:13:18.778 lat (usec): min=1043, max=28115, avg=12510.71, stdev=3449.42 00:13:18.778 clat percentiles (usec): 00:13:18.778 | 1.00th=[ 2933], 5.00th=[ 5473], 10.00th=[ 7439], 20.00th=[10552], 00:13:18.778 | 30.00th=[11994], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:13:18.778 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14877], 95.00th=[16581], 00:13:18.778 | 99.00th=[22938], 99.50th=[24249], 99.90th=[26084], 99.95th=[27132], 00:13:18.778 | 99.99th=[28181] 00:13:18.778 bw ( KiB/s): min=19192, max=20472, per=29.37%, avg=19832.00, stdev=905.10, samples=2 00:13:18.778 iops : min= 4798, max= 5118, avg=4958.00, stdev=226.27, samples=2 00:13:18.778 lat (usec) : 500=0.01% 00:13:18.778 lat (msec) : 2=0.33%, 4=0.64%, 10=11.61%, 20=82.41%, 50=5.00% 00:13:18.778 cpu : usr=5.24%, sys=8.40%, ctx=549, majf=0, minf=1 00:13:18.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:18.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.778 issued rwts: total=4608,5086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.778 job3: (groupid=0, jobs=1): err= 0: pid=1677314: Thu Apr 18 17:00:34 2024 00:13:18.778 
read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:13:18.778 slat (usec): min=3, max=10510, avg=154.21, stdev=893.13 00:13:18.778 clat (usec): min=10779, max=43446, avg=19620.48, stdev=5310.62 00:13:18.778 lat (usec): min=10786, max=43451, avg=19774.70, stdev=5387.17 00:13:18.778 clat percentiles (usec): 00:13:18.778 | 1.00th=[12256], 5.00th=[13698], 10.00th=[15401], 20.00th=[15795], 00:13:18.778 | 30.00th=[16319], 40.00th=[16712], 50.00th=[16909], 60.00th=[18482], 00:13:18.778 | 70.00th=[21365], 80.00th=[24511], 90.00th=[27395], 95.00th=[29492], 00:13:18.778 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:13:18.778 | 99.99th=[43254] 00:13:18.778 write: IOPS=2863, BW=11.2MiB/s (11.7MB/s)(11.3MiB/1011msec); 0 zone resets 00:13:18.778 slat (usec): min=3, max=13739, avg=201.90, stdev=1170.07 00:13:18.778 clat (msec): min=7, max=117, avg=26.79, stdev=17.39 00:13:18.778 lat (msec): min=9, max=117, avg=26.99, stdev=17.50 00:13:18.778 clat percentiles (msec): 00:13:18.778 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 16], 00:13:18.778 | 30.00th=[ 17], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 25], 00:13:18.778 | 70.00th=[ 26], 80.00th=[ 30], 90.00th=[ 43], 95.00th=[ 58], 00:13:18.778 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 118], 00:13:18.778 | 99.99th=[ 118] 00:13:18.778 bw ( KiB/s): min= 9848, max=12288, per=16.39%, avg=11068.00, stdev=1725.34, samples=2 00:13:18.778 iops : min= 2462, max= 3072, avg=2767.00, stdev=431.34, samples=2 00:13:18.778 lat (msec) : 10=0.13%, 20=49.99%, 50=46.43%, 100=2.60%, 250=0.84% 00:13:18.778 cpu : usr=3.17%, sys=4.55%, ctx=217, majf=0, minf=1 00:13:18.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:18.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.778 issued rwts: total=2560,2895,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:13:18.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.778 00:13:18.778 Run status group 0 (all jobs): 00:13:18.778 READ: bw=60.6MiB/s (63.6MB/s), 9.89MiB/s-21.8MiB/s (10.4MB/s-22.8MB/s), io=61.4MiB (64.4MB), run=1007-1013msec 00:13:18.778 WRITE: bw=65.9MiB/s (69.1MB/s), 11.2MiB/s-23.4MiB/s (11.7MB/s-24.5MB/s), io=66.8MiB (70.0MB), run=1007-1013msec 00:13:18.778 00:13:18.778 Disk stats (read/write): 00:13:18.778 nvme0n1: ios=4775/5120, merge=0/0, ticks=52073/51432, in_queue=103505, util=86.87% 00:13:18.778 nvme0n2: ios=2073/2487, merge=0/0, ticks=19553/28764, in_queue=48317, util=97.36% 00:13:18.778 nvme0n3: ios=3989/4096, merge=0/0, ticks=54891/48063, in_queue=102954, util=98.12% 00:13:18.778 nvme0n4: ios=2538/2560, merge=0/0, ticks=24942/26949, in_queue=51891, util=89.58% 00:13:18.778 17:00:34 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:18.778 [global] 00:13:18.778 thread=1 00:13:18.778 invalidate=1 00:13:18.778 rw=randwrite 00:13:18.778 time_based=1 00:13:18.778 runtime=1 00:13:18.778 ioengine=libaio 00:13:18.778 direct=1 00:13:18.778 bs=4096 00:13:18.778 iodepth=128 00:13:18.778 norandommap=0 00:13:18.778 numjobs=1 00:13:18.778 00:13:18.778 verify_dump=1 00:13:18.778 verify_backlog=512 00:13:18.778 verify_state_save=0 00:13:18.778 do_verify=1 00:13:18.778 verify=crc32c-intel 00:13:18.778 [job0] 00:13:18.778 filename=/dev/nvme0n1 00:13:18.778 [job1] 00:13:18.778 filename=/dev/nvme0n2 00:13:18.778 [job2] 00:13:18.778 filename=/dev/nvme0n3 00:13:18.778 [job3] 00:13:18.778 filename=/dev/nvme0n4 00:13:18.778 Could not set queue depth (nvme0n1) 00:13:18.778 Could not set queue depth (nvme0n2) 00:13:18.778 Could not set queue depth (nvme0n3) 00:13:18.778 Could not set queue depth (nvme0n4) 00:13:18.779 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.779 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.779 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.779 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.779 fio-3.35 00:13:18.779 Starting 4 threads 00:13:20.153 00:13:20.153 job0: (groupid=0, jobs=1): err= 0: pid=1677540: Thu Apr 18 17:00:35 2024 00:13:20.153 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:13:20.153 slat (usec): min=3, max=14319, avg=119.87, stdev=708.67 00:13:20.153 clat (usec): min=8393, max=48844, avg=15047.65, stdev=6029.37 00:13:20.153 lat (usec): min=8402, max=48849, avg=15167.52, stdev=6062.56 00:13:20.153 clat percentiles (usec): 00:13:20.153 | 1.00th=[ 9896], 5.00th=[10814], 10.00th=[11994], 20.00th=[12518], 00:13:20.153 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:13:20.153 | 70.00th=[13698], 80.00th=[15926], 90.00th=[21365], 95.00th=[23200], 00:13:20.153 | 99.00th=[45351], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:13:20.153 | 99.99th=[49021] 00:13:20.153 write: IOPS=4459, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1003msec); 0 zone resets 00:13:20.153 slat (usec): min=3, max=6951, avg=104.94, stdev=550.33 00:13:20.153 clat (usec): min=2832, max=30465, avg=14488.08, stdev=4103.49 00:13:20.153 lat (usec): min=2848, max=33793, avg=14593.02, stdev=4107.19 00:13:20.153 clat percentiles (usec): 00:13:20.153 | 1.00th=[ 6980], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[12518], 00:13:20.153 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:13:20.153 | 70.00th=[14091], 80.00th=[16909], 90.00th=[20055], 95.00th=[23200], 00:13:20.153 | 99.00th=[27132], 99.50th=[30016], 99.90th=[30540], 99.95th=[30540], 00:13:20.153 | 99.99th=[30540] 00:13:20.153 bw ( KiB/s): min=15406, max=19392, per=25.13%, avg=17399.00, stdev=2818.53, samples=2 
00:13:20.153 iops : min= 3851, max= 4848, avg=4349.50, stdev=704.99, samples=2 00:13:20.153 lat (msec) : 4=0.49%, 10=3.45%, 20=84.84%, 50=11.21% 00:13:20.153 cpu : usr=5.49%, sys=6.99%, ctx=396, majf=0, minf=13 00:13:20.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:20.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.153 issued rwts: total=4096,4473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.153 job1: (groupid=0, jobs=1): err= 0: pid=1677541: Thu Apr 18 17:00:35 2024 00:13:20.153 read: IOPS=4238, BW=16.6MiB/s (17.4MB/s)(17.3MiB/1042msec) 00:13:20.153 slat (usec): min=2, max=13065, avg=105.21, stdev=587.56 00:13:20.153 clat (usec): min=6259, max=55411, avg=14473.78, stdev=6821.15 00:13:20.153 lat (usec): min=6270, max=55417, avg=14578.99, stdev=6836.14 00:13:20.153 clat percentiles (usec): 00:13:20.153 | 1.00th=[ 7832], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11863], 00:13:20.153 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12911], 00:13:20.153 | 70.00th=[13304], 80.00th=[15008], 90.00th=[19530], 95.00th=[25035], 00:13:20.153 | 99.00th=[51643], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:13:20.153 | 99.99th=[55313] 00:13:20.153 write: IOPS=4422, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1042msec); 0 zone resets 00:13:20.153 slat (usec): min=4, max=6708, avg=105.91, stdev=465.84 00:13:20.153 clat (usec): min=5806, max=33822, avg=14740.74, stdev=5995.84 00:13:20.153 lat (usec): min=5829, max=33829, avg=14846.65, stdev=6031.49 00:13:20.153 clat percentiles (usec): 00:13:20.153 | 1.00th=[ 7046], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11863], 00:13:20.153 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:13:20.153 | 70.00th=[13173], 80.00th=[16057], 90.00th=[23987], 95.00th=[31065], 00:13:20.153 | 
99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:13:20.153 | 99.99th=[33817] 00:13:20.153 bw ( KiB/s): min=16384, max=20480, per=26.62%, avg=18432.00, stdev=2896.31, samples=2 00:13:20.153 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:13:20.153 lat (msec) : 10=5.61%, 20=82.63%, 50=11.14%, 100=0.63% 00:13:20.153 cpu : usr=4.03%, sys=7.30%, ctx=608, majf=0, minf=11 00:13:20.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:20.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.153 issued rwts: total=4417,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.153 job2: (groupid=0, jobs=1): err= 0: pid=1677542: Thu Apr 18 17:00:35 2024 00:13:20.153 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:13:20.153 slat (usec): min=2, max=11450, avg=116.67, stdev=579.72 00:13:20.153 clat (usec): min=3881, max=31315, avg=15102.76, stdev=3045.69 00:13:20.153 lat (usec): min=3887, max=31321, avg=15219.43, stdev=3027.61 00:13:20.153 clat percentiles (usec): 00:13:20.153 | 1.00th=[ 8455], 5.00th=[11731], 10.00th=[12518], 20.00th=[13566], 00:13:20.153 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15008], 60.00th=[15139], 00:13:20.153 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16712], 95.00th=[21365], 00:13:20.153 | 99.00th=[28181], 99.50th=[28181], 99.90th=[31327], 99.95th=[31327], 00:13:20.153 | 99.99th=[31327] 00:13:20.153 write: IOPS=4335, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1004msec); 0 zone resets 00:13:20.153 slat (usec): min=4, max=7302, avg=111.01, stdev=543.12 00:13:20.153 clat (usec): min=447, max=39000, avg=14761.07, stdev=4341.33 00:13:20.153 lat (usec): min=3325, max=39008, avg=14872.08, stdev=4344.01 00:13:20.153 clat percentiles (usec): 00:13:20.153 | 1.00th=[ 7832], 5.00th=[10945], 10.00th=[11731], 
20.00th=[12518], 00:13:20.153 | 30.00th=[12911], 40.00th=[13304], 50.00th=[14353], 60.00th=[14746], 00:13:20.153 | 70.00th=[15008], 80.00th=[15664], 90.00th=[17695], 95.00th=[24773], 00:13:20.154 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38011], 99.95th=[39060], 00:13:20.154 | 99.99th=[39060] 00:13:20.154 bw ( KiB/s): min=16384, max=17416, per=24.41%, avg=16900.00, stdev=729.73, samples=2 00:13:20.154 iops : min= 4096, max= 4354, avg=4225.00, stdev=182.43, samples=2 00:13:20.154 lat (usec) : 500=0.01% 00:13:20.154 lat (msec) : 4=0.36%, 10=2.14%, 20=91.22%, 50=6.27% 00:13:20.154 cpu : usr=4.69%, sys=8.47%, ctx=492, majf=0, minf=13 00:13:20.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:20.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.154 issued rwts: total=4096,4353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.154 job3: (groupid=0, jobs=1): err= 0: pid=1677543: Thu Apr 18 17:00:35 2024 00:13:20.154 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:13:20.154 slat (usec): min=3, max=10096, avg=110.50, stdev=601.80 00:13:20.154 clat (usec): min=8668, max=32566, avg=15138.50, stdev=2781.02 00:13:20.154 lat (usec): min=8674, max=32632, avg=15249.01, stdev=2780.21 00:13:20.154 clat percentiles (usec): 00:13:20.154 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[12125], 20.00th=[13042], 00:13:20.154 | 30.00th=[13566], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:13:20.154 | 70.00th=[15664], 80.00th=[16188], 90.00th=[18220], 95.00th=[21365], 00:13:20.154 | 99.00th=[26608], 99.50th=[26608], 99.90th=[26608], 99.95th=[27919], 00:13:20.154 | 99.99th=[32637] 00:13:20.154 write: IOPS=4591, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:20.154 slat (usec): min=4, max=9176, avg=110.08, stdev=622.53 00:13:20.154 clat 
(usec): min=498, max=25205, avg=14055.84, stdev=2060.51 00:13:20.154 lat (usec): min=3467, max=28893, avg=14165.91, stdev=2106.39 00:13:20.154 clat percentiles (usec): 00:13:20.154 | 1.00th=[ 7701], 5.00th=[11994], 10.00th=[12518], 20.00th=[12780], 00:13:20.154 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[14222], 00:13:20.154 | 70.00th=[14746], 80.00th=[15926], 90.00th=[16319], 95.00th=[16712], 00:13:20.154 | 99.00th=[21103], 99.50th=[21103], 99.90th=[23987], 99.95th=[23987], 00:13:20.154 | 99.99th=[25297] 00:13:20.154 bw ( KiB/s): min=17808, max=17976, per=25.84%, avg=17892.00, stdev=118.79, samples=2 00:13:20.154 iops : min= 4452, max= 4494, avg=4473.00, stdev=29.70, samples=2 00:13:20.154 lat (usec) : 500=0.01% 00:13:20.154 lat (msec) : 4=0.21%, 10=1.45%, 20=94.88%, 50=3.45% 00:13:20.154 cpu : usr=4.80%, sys=8.79%, ctx=366, majf=0, minf=13 00:13:20.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:20.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.154 issued rwts: total=4096,4601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.154 00:13:20.154 Run status group 0 (all jobs): 00:13:20.154 READ: bw=62.6MiB/s (65.7MB/s), 15.9MiB/s-16.6MiB/s (16.7MB/s-17.4MB/s), io=65.3MiB (68.4MB), run=1002-1042msec 00:13:20.154 WRITE: bw=67.6MiB/s (70.9MB/s), 16.9MiB/s-17.9MiB/s (17.8MB/s-18.8MB/s), io=70.4MiB (73.9MB), run=1002-1042msec 00:13:20.154 00:13:20.154 Disk stats (read/write): 00:13:20.154 nvme0n1: ios=3485/3584, merge=0/0, ticks=16093/14848, in_queue=30941, util=86.97% 00:13:20.154 nvme0n2: ios=3612/3896, merge=0/0, ticks=23616/21537, in_queue=45153, util=94.11% 00:13:20.154 nvme0n3: ios=3620/3584, merge=0/0, ticks=15556/13323, in_queue=28879, util=96.88% 00:13:20.154 nvme0n4: ios=3635/3820, merge=0/0, ticks=18432/17360, 
in_queue=35792, util=98.64% 00:13:20.154 17:00:35 -- target/fio.sh@55 -- # sync 00:13:20.154 17:00:35 -- target/fio.sh@59 -- # fio_pid=1677682 00:13:20.154 17:00:35 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:20.154 17:00:35 -- target/fio.sh@61 -- # sleep 3 00:13:20.154 [global] 00:13:20.154 thread=1 00:13:20.154 invalidate=1 00:13:20.154 rw=read 00:13:20.154 time_based=1 00:13:20.154 runtime=10 00:13:20.154 ioengine=libaio 00:13:20.154 direct=1 00:13:20.154 bs=4096 00:13:20.154 iodepth=1 00:13:20.154 norandommap=1 00:13:20.154 numjobs=1 00:13:20.154 00:13:20.154 [job0] 00:13:20.154 filename=/dev/nvme0n1 00:13:20.154 [job1] 00:13:20.154 filename=/dev/nvme0n2 00:13:20.154 [job2] 00:13:20.154 filename=/dev/nvme0n3 00:13:20.154 [job3] 00:13:20.154 filename=/dev/nvme0n4 00:13:20.154 Could not set queue depth (nvme0n1) 00:13:20.154 Could not set queue depth (nvme0n2) 00:13:20.154 Could not set queue depth (nvme0n3) 00:13:20.154 Could not set queue depth (nvme0n4) 00:13:20.154 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.154 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.154 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.154 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.154 fio-3.35 00:13:20.154 Starting 4 threads 00:13:23.431 17:00:38 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:23.431 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8364032, buflen=4096 00:13:23.431 fio: pid=1677894, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:23.431 17:00:38 -- target/fio.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:23.431 17:00:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:23.431 17:00:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:23.431 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=638976, buflen=4096 00:13:23.431 fio: pid=1677893, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:23.688 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=393216, buflen=4096 00:13:23.688 fio: pid=1677861, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:23.688 17:00:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:23.688 17:00:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:23.947 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=34353152, buflen=4096 00:13:23.947 fio: pid=1677881, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:23.947 17:00:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:23.947 17:00:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:23.947 00:13:23.947 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1677861: Thu Apr 18 17:00:39 2024 00:13:23.947 read: IOPS=28, BW=114KiB/s (116kB/s)(384KiB/3380msec) 00:13:23.947 slat (usec): min=7, max=8905, avg=137.83, stdev=945.31 00:13:23.947 clat (usec): min=353, max=41175, avg=35054.13, stdev=14386.85 00:13:23.947 lat (usec): min=362, max=50081, avg=35193.05, stdev=14472.23 00:13:23.947 clat percentiles (usec): 00:13:23.947 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 
441], 20.00th=[41157], 00:13:23.947 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:23.947 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:23.947 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:23.947 | 99.99th=[41157] 00:13:23.947 bw ( KiB/s): min= 96, max= 208, per=0.97%, avg=114.67, stdev=45.72, samples=6 00:13:23.947 iops : min= 24, max= 52, avg=28.67, stdev=11.43, samples=6 00:13:23.947 lat (usec) : 500=13.40%, 750=1.03% 00:13:23.947 lat (msec) : 50=84.54% 00:13:23.947 cpu : usr=0.06%, sys=0.00%, ctx=99, majf=0, minf=1 00:13:23.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.947 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.947 issued rwts: total=97,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.947 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1677881: Thu Apr 18 17:00:39 2024 00:13:23.947 read: IOPS=2310, BW=9239KiB/s (9461kB/s)(32.8MiB/3631msec) 00:13:23.947 slat (usec): min=5, max=16076, avg=15.52, stdev=253.41 00:13:23.947 clat (usec): min=276, max=47998, avg=414.55, stdev=1847.91 00:13:23.947 lat (usec): min=281, max=48997, avg=430.06, stdev=1890.07 00:13:23.947 clat percentiles (usec): 00:13:23.947 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:13:23.947 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:13:23.947 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 375], 00:13:23.947 | 99.00th=[ 429], 99.50th=[ 494], 99.90th=[41157], 99.95th=[41157], 00:13:23.947 | 99.99th=[47973] 00:13:23.947 bw ( KiB/s): min= 120, max=12392, per=81.42%, avg=9580.57, stdev=4265.12, samples=7 00:13:23.947 iops : min= 30, max= 3098, avg=2395.14, stdev=1066.28, 
samples=7 00:13:23.947 lat (usec) : 500=99.52%, 750=0.23% 00:13:23.947 lat (msec) : 2=0.04%, 50=0.20% 00:13:23.947 cpu : usr=1.60%, sys=4.02%, ctx=8392, majf=0, minf=1 00:13:23.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.947 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.947 issued rwts: total=8388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.947 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1677893: Thu Apr 18 17:00:39 2024 00:13:23.947 read: IOPS=50, BW=199KiB/s (204kB/s)(624KiB/3128msec) 00:13:23.947 slat (usec): min=5, max=10902, avg=143.33, stdev=1158.73 00:13:23.947 clat (usec): min=290, max=43949, avg=19897.13, stdev=20386.40 00:13:23.947 lat (usec): min=296, max=43971, avg=20041.28, stdev=20297.88 00:13:23.947 clat percentiles (usec): 00:13:23.947 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:13:23.947 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 416], 60.00th=[41157], 00:13:23.947 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:23.947 | 99.00th=[41157], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:13:23.947 | 99.99th=[43779] 00:13:23.947 bw ( KiB/s): min= 96, max= 728, per=1.72%, avg=202.67, stdev=257.38, samples=6 00:13:23.947 iops : min= 24, max= 182, avg=50.67, stdev=64.34, samples=6 00:13:23.947 lat (usec) : 500=50.32%, 750=0.64% 00:13:23.947 lat (msec) : 4=0.64%, 50=47.77% 00:13:23.947 cpu : usr=0.00%, sys=0.13%, ctx=159, majf=0, minf=1 00:13:23.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.947 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.947 issued 
rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.947 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1677894: Thu Apr 18 17:00:39 2024 00:13:23.947 read: IOPS=712, BW=2849KiB/s (2917kB/s)(8168KiB/2867msec) 00:13:23.947 slat (nsec): min=4324, max=64983, avg=14182.03, stdev=8902.73 00:13:23.947 clat (usec): min=249, max=41320, avg=1385.93, stdev=6402.29 00:13:23.947 lat (usec): min=262, max=41354, avg=1400.10, stdev=6403.15 00:13:23.947 clat percentiles (usec): 00:13:23.947 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:13:23.947 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 371], 00:13:23.947 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 465], 95.00th=[ 502], 00:13:23.947 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:23.947 | 99.99th=[41157] 00:13:23.947 bw ( KiB/s): min= 96, max=10696, per=27.64%, avg=3252.80, stdev=4346.85, samples=5 00:13:23.947 iops : min= 24, max= 2674, avg=813.20, stdev=1086.71, samples=5 00:13:23.947 lat (usec) : 250=0.05%, 500=94.96%, 750=2.40% 00:13:23.947 lat (msec) : 50=2.55% 00:13:23.948 cpu : usr=0.28%, sys=1.29%, ctx=2043, majf=0, minf=1 00:13:23.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.948 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.948 00:13:23.948 Run status group 0 (all jobs): 00:13:23.948 READ: bw=11.5MiB/s (12.0MB/s), 114KiB/s-9239KiB/s (116kB/s-9461kB/s), io=41.7MiB (43.7MB), run=2867-3631msec 00:13:23.948 00:13:23.948 Disk stats (read/write): 00:13:23.948 nvme0n1: ios=95/0, merge=0/0, ticks=3325/0, in_queue=3325, util=95.57% 
00:13:23.948 nvme0n2: ios=8386/0, merge=0/0, ticks=3348/0, in_queue=3348, util=95.47% 00:13:23.948 nvme0n3: ios=155/0, merge=0/0, ticks=3063/0, in_queue=3063, util=96.16% 00:13:23.948 nvme0n4: ios=2041/0, merge=0/0, ticks=2779/0, in_queue=2779, util=96.74% 00:13:24.206 17:00:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.206 17:00:39 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:24.464 17:00:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.464 17:00:40 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:24.721 17:00:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.721 17:00:40 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:24.979 17:00:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.979 17:00:40 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:25.237 17:00:40 -- target/fio.sh@69 -- # fio_status=0 00:13:25.237 17:00:40 -- target/fio.sh@70 -- # wait 1677682 00:13:25.237 17:00:40 -- target/fio.sh@70 -- # fio_status=4 00:13:25.237 17:00:40 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.500 17:00:40 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.500 17:00:40 -- common/autotest_common.sh@1205 -- # local i=0 00:13:25.500 17:00:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:25.500 17:00:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.500 17:00:40 
-- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:25.500 17:00:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.500 17:00:40 -- common/autotest_common.sh@1217 -- # return 0 00:13:25.500 17:00:40 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:25.500 17:00:40 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:25.500 nvmf hotplug test: fio failed as expected 00:13:25.500 17:00:40 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.832 17:00:41 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:25.832 17:00:41 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:25.832 17:00:41 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:25.832 17:00:41 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:25.832 17:00:41 -- target/fio.sh@91 -- # nvmftestfini 00:13:25.832 17:00:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:25.832 17:00:41 -- nvmf/common.sh@117 -- # sync 00:13:25.832 17:00:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.832 17:00:41 -- nvmf/common.sh@120 -- # set +e 00:13:25.832 17:00:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.832 17:00:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.832 rmmod nvme_tcp 00:13:25.832 rmmod nvme_fabrics 00:13:25.832 rmmod nvme_keyring 00:13:25.832 17:00:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.832 17:00:41 -- nvmf/common.sh@124 -- # set -e 00:13:25.832 17:00:41 -- nvmf/common.sh@125 -- # return 0 00:13:25.832 17:00:41 -- nvmf/common.sh@478 -- # '[' -n 1675645 ']' 00:13:25.832 17:00:41 -- nvmf/common.sh@479 -- # killprocess 1675645 00:13:25.832 17:00:41 -- common/autotest_common.sh@936 -- # '[' -z 1675645 ']' 00:13:25.832 17:00:41 -- common/autotest_common.sh@940 -- # kill -0 1675645 00:13:25.832 17:00:41 -- common/autotest_common.sh@941 -- # uname 
00:13:25.832 17:00:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:25.832 17:00:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1675645 00:13:25.832 17:00:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:25.832 17:00:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:25.832 17:00:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1675645' 00:13:25.832 killing process with pid 1675645 00:13:25.832 17:00:41 -- common/autotest_common.sh@955 -- # kill 1675645 00:13:25.832 17:00:41 -- common/autotest_common.sh@960 -- # wait 1675645 00:13:26.092 17:00:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:26.092 17:00:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:26.092 17:00:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:26.092 17:00:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.092 17:00:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.092 17:00:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.092 17:00:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.092 17:00:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.996 17:00:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:27.996 00:13:27.996 real 0m23.739s 00:13:27.996 user 1m23.091s 00:13:27.996 sys 0m6.897s 00:13:27.996 17:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:27.996 17:00:43 -- common/autotest_common.sh@10 -- # set +x 00:13:27.996 ************************************ 00:13:27.996 END TEST nvmf_fio_target 00:13:27.996 ************************************ 00:13:27.996 17:00:43 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:27.996 17:00:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:27.996 17:00:43 -- common/autotest_common.sh@1093 -- 
# xtrace_disable 00:13:27.996 17:00:43 -- common/autotest_common.sh@10 -- # set +x 00:13:28.256 ************************************ 00:13:28.256 START TEST nvmf_bdevio 00:13:28.256 ************************************ 00:13:28.256 17:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:28.256 * Looking for test storage... 00:13:28.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.256 17:00:43 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.256 17:00:43 -- nvmf/common.sh@7 -- # uname -s 00:13:28.256 17:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.256 17:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.256 17:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.256 17:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.256 17:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.256 17:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.256 17:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.256 17:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.256 17:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.256 17:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.256 17:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.256 17:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.256 17:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.256 17:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.256 17:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.256 17:00:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.256 17:00:43 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.256 17:00:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.256 17:00:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.256 17:00:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.256 17:00:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.256 17:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.256 17:00:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.256 17:00:43 -- paths/export.sh@5 -- # export PATH 00:13:28.256 17:00:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.256 17:00:43 -- nvmf/common.sh@47 -- # : 0 00:13:28.256 17:00:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.256 17:00:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.256 17:00:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.256 17:00:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.256 17:00:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.256 17:00:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.256 17:00:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.256 17:00:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.256 17:00:43 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.256 17:00:43 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.256 17:00:43 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:13:28.256 17:00:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:28.256 17:00:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.256 17:00:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:28.256 17:00:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:28.256 17:00:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:28.256 17:00:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.256 17:00:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.256 17:00:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.256 17:00:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:28.256 17:00:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:28.257 17:00:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.257 17:00:43 -- common/autotest_common.sh@10 -- # set +x 00:13:30.164 17:00:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:30.164 17:00:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.164 17:00:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.164 17:00:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.164 17:00:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.164 17:00:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.164 17:00:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.164 17:00:45 -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.164 17:00:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.164 17:00:45 -- nvmf/common.sh@296 -- # e810=() 00:13:30.164 17:00:45 -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.164 17:00:45 -- nvmf/common.sh@297 -- # x722=() 00:13:30.164 17:00:45 -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.164 17:00:45 -- nvmf/common.sh@298 -- # mlx=() 00:13:30.164 17:00:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.164 17:00:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.164 17:00:45 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.164 17:00:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.164 17:00:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.164 17:00:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.164 17:00:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.164 17:00:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.164 17:00:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.164 17:00:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.165 17:00:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.165 17:00:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.165 17:00:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.165 17:00:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.165 17:00:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.165 17:00:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.165 17:00:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:30.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:30.165 17:00:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.165 17:00:45 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:30.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:30.165 17:00:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.165 17:00:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.165 17:00:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.165 17:00:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:30.165 17:00:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.165 17:00:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:30.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:30.165 17:00:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.165 17:00:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.165 17:00:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.165 17:00:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:30.165 17:00:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.165 17:00:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:30.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:30.165 17:00:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.165 17:00:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:30.165 17:00:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:30.165 17:00:45 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:13:30.165 17:00:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:30.165 17:00:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.165 17:00:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.165 17:00:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.165 17:00:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.165 17:00:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.165 17:00:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.165 17:00:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.165 17:00:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.165 17:00:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.165 17:00:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.165 17:00:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.165 17:00:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.165 17:00:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.165 17:00:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.165 17:00:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.165 17:00:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.165 17:00:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.165 17:00:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.165 17:00:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.165 17:00:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:13:30.165 00:13:30.165 --- 10.0.0.2 ping statistics --- 00:13:30.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.165 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:13:30.165 17:00:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:13:30.165 00:13:30.165 --- 10.0.0.1 ping statistics --- 00:13:30.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.165 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:30.165 17:00:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.165 17:00:45 -- nvmf/common.sh@411 -- # return 0 00:13:30.165 17:00:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:30.165 17:00:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.165 17:00:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:30.165 17:00:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.165 17:00:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:30.165 17:00:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:30.425 17:00:45 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:30.425 17:00:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:30.425 17:00:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:30.426 17:00:45 -- common/autotest_common.sh@10 -- # set +x 00:13:30.426 17:00:45 -- nvmf/common.sh@470 -- # nvmfpid=1680437 00:13:30.426 17:00:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:30.426 17:00:45 -- nvmf/common.sh@471 -- # waitforlisten 1680437 00:13:30.426 17:00:45 -- common/autotest_common.sh@817 
-- # '[' -z 1680437 ']' 00:13:30.426 17:00:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.426 17:00:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:30.426 17:00:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.426 17:00:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:30.426 17:00:45 -- common/autotest_common.sh@10 -- # set +x 00:13:30.426 [2024-04-18 17:00:45.929316] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:13:30.426 [2024-04-18 17:00:45.929411] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.426 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.426 [2024-04-18 17:00:46.004606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.426 [2024-04-18 17:00:46.129752] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.426 [2024-04-18 17:00:46.129817] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.426 [2024-04-18 17:00:46.129834] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.426 [2024-04-18 17:00:46.129847] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.426 [2024-04-18 17:00:46.129859] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:30.426 [2024-04-18 17:00:46.129949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:30.426 [2024-04-18 17:00:46.130005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:30.426 [2024-04-18 17:00:46.130056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:30.426 [2024-04-18 17:00:46.130060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.684 17:00:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:30.684 17:00:46 -- common/autotest_common.sh@850 -- # return 0 00:13:30.684 17:00:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:30.684 17:00:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:30.684 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:30.684 17:00:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.684 17:00:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.684 17:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.684 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:30.684 [2024-04-18 17:00:46.289208] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.684 17:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.684 17:00:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:30.684 17:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.684 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:30.684 Malloc0 00:13:30.684 17:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.684 17:00:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:30.684 17:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.684 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:30.684 17:00:46 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:13:30.685 17:00:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.685 17:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.685 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:30.685 17:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.685 17:00:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.685 17:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.685 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:30.685 [2024-04-18 17:00:46.342817] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.685 17:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.685 17:00:46 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:30.685 17:00:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:30.685 17:00:46 -- nvmf/common.sh@521 -- # config=() 00:13:30.685 17:00:46 -- nvmf/common.sh@521 -- # local subsystem config 00:13:30.685 17:00:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:30.685 17:00:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:30.685 { 00:13:30.685 "params": { 00:13:30.685 "name": "Nvme$subsystem", 00:13:30.685 "trtype": "$TEST_TRANSPORT", 00:13:30.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:30.685 "adrfam": "ipv4", 00:13:30.685 "trsvcid": "$NVMF_PORT", 00:13:30.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:30.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:30.685 "hdgst": ${hdgst:-false}, 00:13:30.685 "ddgst": ${ddgst:-false} 00:13:30.685 }, 00:13:30.685 "method": "bdev_nvme_attach_controller" 00:13:30.685 } 00:13:30.685 EOF 00:13:30.685 )") 00:13:30.685 17:00:46 -- nvmf/common.sh@543 -- # cat 00:13:30.685 17:00:46 -- nvmf/common.sh@545 -- # jq . 
00:13:30.685 17:00:46 -- nvmf/common.sh@546 -- # IFS=, 00:13:30.685 17:00:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:30.685 "params": { 00:13:30.685 "name": "Nvme1", 00:13:30.685 "trtype": "tcp", 00:13:30.685 "traddr": "10.0.0.2", 00:13:30.685 "adrfam": "ipv4", 00:13:30.685 "trsvcid": "4420", 00:13:30.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.685 "hdgst": false, 00:13:30.685 "ddgst": false 00:13:30.685 }, 00:13:30.685 "method": "bdev_nvme_attach_controller" 00:13:30.685 }' 00:13:30.685 [2024-04-18 17:00:46.388758] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:13:30.685 [2024-04-18 17:00:46.388842] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1680553 ] 00:13:30.943 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.943 [2024-04-18 17:00:46.449548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.943 [2024-04-18 17:00:46.562006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.943 [2024-04-18 17:00:46.562054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.943 [2024-04-18 17:00:46.562057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.202 I/O targets: 00:13:31.202 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:31.202 00:13:31.202 00:13:31.202 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.203 http://cunit.sourceforge.net/ 00:13:31.203 00:13:31.203 00:13:31.203 Suite: bdevio tests on: Nvme1n1 00:13:31.462 Test: blockdev write read block ...passed 00:13:31.462 Test: blockdev write zeroes read block ...passed 00:13:31.462 Test: blockdev write zeroes read no split ...passed 00:13:31.462 Test: blockdev write zeroes read split ...passed 00:13:31.462 Test: blockdev write 
zeroes read split partial ...passed 00:13:31.462 Test: blockdev reset ...[2024-04-18 17:00:47.094313] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:31.462 [2024-04-18 17:00:47.094441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbff60 (9): Bad file descriptor 00:13:31.462 [2024-04-18 17:00:47.155256] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:31.462 passed 00:13:31.462 Test: blockdev write read 8 blocks ...passed 00:13:31.462 Test: blockdev write read size > 128k ...passed 00:13:31.462 Test: blockdev write read invalid size ...passed 00:13:31.723 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.723 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.723 Test: blockdev write read max offset ...passed 00:13:31.723 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.723 Test: blockdev writev readv 8 blocks ...passed 00:13:31.723 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.723 Test: blockdev writev readv block ...passed 00:13:31.723 Test: blockdev writev readv size > 128k ...passed 00:13:31.723 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.723 Test: blockdev comparev and writev ...[2024-04-18 17:00:47.372572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.723 [2024-04-18 17:00:47.372618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:31.723 [2024-04-18 17:00:47.372643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.723 [2024-04-18 17:00:47.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:31.723 [2024-04-18 17:00:47.373053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.723 [2024-04-18 17:00:47.373078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:31.723 [2024-04-18 17:00:47.373099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.723 [2024-04-18 17:00:47.373116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:31.723 [2024-04-18 17:00:47.373501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.723 [2024-04-18 17:00:47.373525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:31.723 [2024-04-18 17:00:47.373547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.723 [2024-04-18 17:00:47.373564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:31.723 [2024-04-18 17:00:47.373918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.723 [2024-04-18 17:00:47.373943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:31.723 [2024-04-18 17:00:47.373965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.723 [2024-04-18 17:00:47.373982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:31.723 passed 00:13:31.982 Test: blockdev nvme passthru rw ...passed 00:13:31.982 Test: blockdev nvme passthru vendor specific ...[2024-04-18 17:00:47.456682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.982 [2024-04-18 17:00:47.456709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:31.982 [2024-04-18 17:00:47.456873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.982 [2024-04-18 17:00:47.456896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:31.982 [2024-04-18 17:00:47.457058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.982 [2024-04-18 17:00:47.457080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:31.982 [2024-04-18 17:00:47.457238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.982 [2024-04-18 17:00:47.457260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:31.982 passed 00:13:31.982 Test: blockdev nvme admin passthru ...passed 00:13:31.982 Test: blockdev copy ...passed 00:13:31.982 00:13:31.982 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.982 suites 1 1 n/a 0 0 00:13:31.982 tests 23 23 23 0 0 00:13:31.982 asserts 152 152 152 0 n/a 00:13:31.982 00:13:31.982 Elapsed time = 1.238 seconds 00:13:32.242 17:00:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:32.242 17:00:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.242 17:00:47 -- common/autotest_common.sh@10 -- # set +x 00:13:32.242 17:00:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.242 17:00:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:32.242 17:00:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:32.242 17:00:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:32.242 17:00:47 -- nvmf/common.sh@117 -- # sync 00:13:32.242 17:00:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.242 17:00:47 -- nvmf/common.sh@120 -- # set +e 00:13:32.242 17:00:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.242 17:00:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.242 rmmod nvme_tcp 00:13:32.242 rmmod nvme_fabrics 00:13:32.242 rmmod nvme_keyring 00:13:32.242 17:00:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.242 17:00:47 -- nvmf/common.sh@124 -- # set -e 00:13:32.242 17:00:47 -- nvmf/common.sh@125 -- # return 0 00:13:32.242 17:00:47 -- nvmf/common.sh@478 -- # '[' -n 1680437 ']' 00:13:32.242 17:00:47 -- nvmf/common.sh@479 -- # killprocess 1680437 00:13:32.242 17:00:47 -- common/autotest_common.sh@936 -- # '[' -z 1680437 ']' 00:13:32.242 17:00:47 -- common/autotest_common.sh@940 -- # kill -0 1680437 00:13:32.242 17:00:47 -- common/autotest_common.sh@941 -- # uname 00:13:32.242 17:00:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:32.242 17:00:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1680437 00:13:32.242 17:00:47 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:32.242 17:00:47 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:32.242 17:00:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1680437' 00:13:32.242 killing process with pid 1680437 00:13:32.242 17:00:47 -- common/autotest_common.sh@955 -- # kill 1680437 00:13:32.242 17:00:47 -- 
common/autotest_common.sh@960 -- # wait 1680437 00:13:32.501 17:00:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:32.501 17:00:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:32.501 17:00:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:32.501 17:00:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.501 17:00:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.501 17:00:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.501 17:00:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.501 17:00:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.045 17:00:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:35.045 00:13:35.045 real 0m6.441s 00:13:35.045 user 0m11.288s 00:13:35.045 sys 0m2.027s 00:13:35.045 17:00:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:35.045 17:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.045 ************************************ 00:13:35.045 END TEST nvmf_bdevio 00:13:35.045 ************************************ 00:13:35.045 17:00:50 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:13:35.045 17:00:50 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:35.045 17:00:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:35.045 17:00:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.045 17:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.045 ************************************ 00:13:35.045 START TEST nvmf_bdevio_no_huge 00:13:35.045 ************************************ 00:13:35.045 17:00:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:35.045 * Looking for test storage... 
00:13:35.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.045 17:00:50 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.045 17:00:50 -- nvmf/common.sh@7 -- # uname -s 00:13:35.045 17:00:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.045 17:00:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.045 17:00:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.045 17:00:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.046 17:00:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.046 17:00:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.046 17:00:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.046 17:00:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.046 17:00:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.046 17:00:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.046 17:00:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.046 17:00:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.046 17:00:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.046 17:00:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.046 17:00:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.046 17:00:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.046 17:00:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.046 17:00:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.046 17:00:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.046 17:00:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.046 17:00:50 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.046 17:00:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.046 17:00:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.046 17:00:50 -- paths/export.sh@5 -- # export PATH 00:13:35.046 17:00:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.046 17:00:50 -- nvmf/common.sh@47 -- # : 0 00:13:35.046 17:00:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.046 17:00:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.046 17:00:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.046 17:00:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.046 17:00:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.046 17:00:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.046 17:00:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.046 17:00:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.046 17:00:50 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.046 17:00:50 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.046 17:00:50 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:35.046 17:00:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:35.046 17:00:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.046 17:00:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:35.046 17:00:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:35.046 17:00:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:35.046 17:00:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.046 17:00:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.046 17:00:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.046 17:00:50 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:35.046 17:00:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:35.046 17:00:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.046 17:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:36.952 17:00:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:36.952 17:00:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:36.952 17:00:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:36.952 17:00:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:36.952 17:00:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:36.952 17:00:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:36.952 17:00:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:36.952 17:00:52 -- nvmf/common.sh@295 -- # net_devs=() 00:13:36.952 17:00:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:36.952 17:00:52 -- nvmf/common.sh@296 -- # e810=() 00:13:36.952 17:00:52 -- nvmf/common.sh@296 -- # local -ga e810 00:13:36.952 17:00:52 -- nvmf/common.sh@297 -- # x722=() 00:13:36.952 17:00:52 -- nvmf/common.sh@297 -- # local -ga x722 00:13:36.952 17:00:52 -- nvmf/common.sh@298 -- # mlx=() 00:13:36.952 17:00:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:36.952 17:00:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.952 17:00:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.952 17:00:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.952 17:00:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.952 17:00:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.952 17:00:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.953 17:00:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.953 17:00:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.953 17:00:52 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.953 17:00:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.953 17:00:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.953 17:00:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:36.953 17:00:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:36.953 17:00:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:36.953 17:00:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.953 17:00:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:36.953 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:36.953 17:00:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.953 17:00:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:36.953 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:36.953 17:00:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:36.953 17:00:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:36.953 17:00:52 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.953 17:00:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.953 17:00:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:36.953 17:00:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.953 17:00:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:36.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:36.953 17:00:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.953 17:00:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.953 17:00:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.953 17:00:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:36.953 17:00:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.953 17:00:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:36.953 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:36.953 17:00:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.953 17:00:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:36.953 17:00:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:36.953 17:00:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:36.953 17:00:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.953 17:00:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.953 17:00:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.953 17:00:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:36.953 17:00:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.953 17:00:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.953 17:00:52 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:36.953 17:00:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.953 17:00:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.953 17:00:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:36.953 17:00:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:36.953 17:00:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.953 17:00:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.953 17:00:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.953 17:00:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.953 17:00:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:36.953 17:00:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.953 17:00:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.953 17:00:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.953 17:00:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:36.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:13:36.953 00:13:36.953 --- 10.0.0.2 ping statistics --- 00:13:36.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.953 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:36.953 17:00:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:13:36.953 00:13:36.953 --- 10.0.0.1 ping statistics --- 00:13:36.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.953 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:36.953 17:00:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.953 17:00:52 -- nvmf/common.sh@411 -- # return 0 00:13:36.953 17:00:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:36.953 17:00:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.953 17:00:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:36.953 17:00:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.953 17:00:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:36.953 17:00:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:36.953 17:00:52 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:36.953 17:00:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:36.953 17:00:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:36.953 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.953 17:00:52 -- nvmf/common.sh@470 -- # nvmfpid=1682634 00:13:36.953 17:00:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:36.953 17:00:52 -- nvmf/common.sh@471 -- # waitforlisten 1682634 00:13:36.953 17:00:52 -- common/autotest_common.sh@817 -- # '[' -z 1682634 ']' 00:13:36.953 17:00:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.953 17:00:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:36.953 17:00:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:36.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.953 17:00:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:36.953 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.953 [2024-04-18 17:00:52.450809] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:13:36.953 [2024-04-18 17:00:52.450895] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:36.953 [2024-04-18 17:00:52.524840] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.953 [2024-04-18 17:00:52.633901] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.953 [2024-04-18 17:00:52.633971] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.953 [2024-04-18 17:00:52.633986] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.953 [2024-04-18 17:00:52.633998] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.953 [2024-04-18 17:00:52.634009] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:36.953 [2024-04-18 17:00:52.634099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:36.953 [2024-04-18 17:00:52.634164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:36.953 [2024-04-18 17:00:52.634255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:36.953 [2024-04-18 17:00:52.634258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.211 17:00:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:37.211 17:00:52 -- common/autotest_common.sh@850 -- # return 0 00:13:37.211 17:00:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:37.211 17:00:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:37.211 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:37.211 17:00:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.211 17:00:52 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.211 17:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.211 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:37.211 [2024-04-18 17:00:52.756889] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.211 17:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.211 17:00:52 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:37.211 17:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.211 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:37.211 Malloc0 00:13:37.211 17:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.211 17:00:52 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:37.211 17:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.211 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:37.211 17:00:52 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:13:37.211 17:00:52 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:37.211 17:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.211 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:37.211 17:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.211 17:00:52 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.211 17:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.211 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:37.211 [2024-04-18 17:00:52.794795] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.211 17:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.211 17:00:52 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:37.211 17:00:52 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:37.211 17:00:52 -- nvmf/common.sh@521 -- # config=() 00:13:37.211 17:00:52 -- nvmf/common.sh@521 -- # local subsystem config 00:13:37.211 17:00:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:37.211 17:00:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:37.211 { 00:13:37.211 "params": { 00:13:37.211 "name": "Nvme$subsystem", 00:13:37.211 "trtype": "$TEST_TRANSPORT", 00:13:37.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:37.211 "adrfam": "ipv4", 00:13:37.211 "trsvcid": "$NVMF_PORT", 00:13:37.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:37.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:37.211 "hdgst": ${hdgst:-false}, 00:13:37.211 "ddgst": ${ddgst:-false} 00:13:37.211 }, 00:13:37.211 "method": "bdev_nvme_attach_controller" 00:13:37.211 } 00:13:37.211 EOF 00:13:37.211 )") 00:13:37.211 17:00:52 -- nvmf/common.sh@543 -- # cat 00:13:37.211 17:00:52 -- nvmf/common.sh@545 -- # jq 
. 00:13:37.211 17:00:52 -- nvmf/common.sh@546 -- # IFS=, 00:13:37.211 17:00:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:37.211 "params": { 00:13:37.211 "name": "Nvme1", 00:13:37.211 "trtype": "tcp", 00:13:37.211 "traddr": "10.0.0.2", 00:13:37.211 "adrfam": "ipv4", 00:13:37.211 "trsvcid": "4420", 00:13:37.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.211 "hdgst": false, 00:13:37.211 "ddgst": false 00:13:37.211 }, 00:13:37.211 "method": "bdev_nvme_attach_controller" 00:13:37.211 }' 00:13:37.211 [2024-04-18 17:00:52.836236] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:13:37.211 [2024-04-18 17:00:52.836329] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1682783 ] 00:13:37.211 [2024-04-18 17:00:52.900420] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.470 [2024-04-18 17:00:53.010888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.470 [2024-04-18 17:00:53.010937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.470 [2024-04-18 17:00:53.010940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.728 I/O targets: 00:13:37.728 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:37.728 00:13:37.728 00:13:37.728 CUnit - A unit testing framework for C - Version 2.1-3 00:13:37.728 http://cunit.sourceforge.net/ 00:13:37.728 00:13:37.728 00:13:37.728 Suite: bdevio tests on: Nvme1n1 00:13:37.728 Test: blockdev write read block ...passed 00:13:37.728 Test: blockdev write zeroes read block ...passed 00:13:37.728 Test: blockdev write zeroes read no split ...passed 00:13:37.728 Test: blockdev write zeroes read split ...passed 00:13:37.728 Test: blockdev write zeroes read split partial ...passed 00:13:37.728 Test: 
blockdev reset ...[2024-04-18 17:00:53.417791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:37.728 [2024-04-18 17:00:53.417897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193a5c0 (9): Bad file descriptor 00:13:37.987 [2024-04-18 17:00:53.519545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:37.987 passed 00:13:37.987 Test: blockdev write read 8 blocks ...passed 00:13:37.987 Test: blockdev write read size > 128k ...passed 00:13:37.987 Test: blockdev write read invalid size ...passed 00:13:37.987 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:37.987 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:37.987 Test: blockdev write read max offset ...passed 00:13:37.987 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:38.248 Test: blockdev writev readv 8 blocks ...passed 00:13:38.248 Test: blockdev writev readv 30 x 1block ...passed 00:13:38.248 Test: blockdev writev readv block ...passed 00:13:38.248 Test: blockdev writev readv size > 128k ...passed 00:13:38.248 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:38.248 Test: blockdev comparev and writev ...[2024-04-18 17:00:53.771939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.248 [2024-04-18 17:00:53.771977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.772002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.248 [2024-04-18 17:00:53.772022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:13:38.248 [2024-04-18 17:00:53.772400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.248 [2024-04-18 17:00:53.772436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.772458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.248 [2024-04-18 17:00:53.772474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.772805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.248 [2024-04-18 17:00:53.772829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.772850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.248 [2024-04-18 17:00:53.772868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.773218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.248 [2024-04-18 17:00:53.773244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.773266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.248 [2024-04-18 17:00:53.773284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:38.248 passed 00:13:38.248 Test: blockdev nvme passthru rw ...passed 00:13:38.248 Test: blockdev nvme passthru vendor specific ...[2024-04-18 17:00:53.855667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:38.248 [2024-04-18 17:00:53.855695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.855861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:38.248 [2024-04-18 17:00:53.855884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.856051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:38.248 [2024-04-18 17:00:53.856073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:38.248 [2024-04-18 17:00:53.856243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:38.248 [2024-04-18 17:00:53.856265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:38.248 passed 00:13:38.248 Test: blockdev nvme admin passthru ...passed 00:13:38.248 Test: blockdev copy ...passed 00:13:38.248 00:13:38.248 Run Summary: Type Total Ran Passed Failed Inactive 00:13:38.248 suites 1 1 n/a 0 0 00:13:38.248 tests 23 23 23 0 0 00:13:38.248 asserts 152 152 152 0 n/a 00:13:38.248 00:13:38.248 Elapsed time = 1.320 seconds 00:13:38.817 17:00:54 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.817 17:00:54 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.817 17:00:54 -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 17:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.817 17:00:54 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:38.817 17:00:54 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:38.817 17:00:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:38.817 17:00:54 -- nvmf/common.sh@117 -- # sync 00:13:38.817 17:00:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:38.817 17:00:54 -- nvmf/common.sh@120 -- # set +e 00:13:38.817 17:00:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.817 17:00:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:38.817 rmmod nvme_tcp 00:13:38.817 rmmod nvme_fabrics 00:13:38.817 rmmod nvme_keyring 00:13:38.817 17:00:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.817 17:00:54 -- nvmf/common.sh@124 -- # set -e 00:13:38.817 17:00:54 -- nvmf/common.sh@125 -- # return 0 00:13:38.817 17:00:54 -- nvmf/common.sh@478 -- # '[' -n 1682634 ']' 00:13:38.817 17:00:54 -- nvmf/common.sh@479 -- # killprocess 1682634 00:13:38.817 17:00:54 -- common/autotest_common.sh@936 -- # '[' -z 1682634 ']' 00:13:38.817 17:00:54 -- common/autotest_common.sh@940 -- # kill -0 1682634 00:13:38.817 17:00:54 -- common/autotest_common.sh@941 -- # uname 00:13:38.817 17:00:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:38.817 17:00:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1682634 00:13:38.817 17:00:54 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:38.817 17:00:54 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:38.817 17:00:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1682634' 00:13:38.817 killing process with pid 1682634 00:13:38.817 17:00:54 -- common/autotest_common.sh@955 -- # kill 1682634 00:13:38.817 17:00:54 -- common/autotest_common.sh@960 -- # wait 1682634 00:13:39.219 
17:00:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:39.219 17:00:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:39.219 17:00:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:39.219 17:00:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.219 17:00:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.219 17:00:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.219 17:00:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.219 17:00:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.126 17:00:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:41.126 00:13:41.126 real 0m6.496s 00:13:41.126 user 0m11.185s 00:13:41.126 sys 0m2.442s 00:13:41.126 17:00:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:41.126 17:00:56 -- common/autotest_common.sh@10 -- # set +x 00:13:41.126 ************************************ 00:13:41.126 END TEST nvmf_bdevio_no_huge 00:13:41.126 ************************************ 00:13:41.384 17:00:56 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:41.384 17:00:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:41.384 17:00:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.384 17:00:56 -- common/autotest_common.sh@10 -- # set +x 00:13:41.384 ************************************ 00:13:41.384 START TEST nvmf_tls 00:13:41.384 ************************************ 00:13:41.384 17:00:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:41.384 * Looking for test storage... 
00:13:41.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.384 17:00:56 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.384 17:00:56 -- nvmf/common.sh@7 -- # uname -s 00:13:41.384 17:00:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.384 17:00:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.384 17:00:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.384 17:00:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.384 17:00:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.384 17:00:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.384 17:00:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.384 17:00:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.384 17:00:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.384 17:00:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.384 17:00:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.384 17:00:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.384 17:00:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.384 17:00:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.384 17:00:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.384 17:00:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.384 17:00:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.384 17:00:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.384 17:00:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.384 17:00:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.384 17:00:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.384 17:00:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.384 17:00:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.384 17:00:57 -- paths/export.sh@5 -- # export PATH 00:13:41.384 17:00:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.384 17:00:57 -- nvmf/common.sh@47 -- # : 0 00:13:41.384 17:00:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.384 17:00:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.384 17:00:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.384 17:00:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.384 17:00:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.384 17:00:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.384 17:00:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.384 17:00:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.384 17:00:57 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.384 17:00:57 -- target/tls.sh@62 -- # nvmftestinit 00:13:41.384 17:00:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:41.384 17:00:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.384 17:00:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:41.384 17:00:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:41.384 17:00:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:41.384 17:00:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.384 17:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.384 17:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.384 17:00:57 -- nvmf/common.sh@403 -- # [[ phy != virt 
]] 00:13:41.384 17:00:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:41.384 17:00:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.384 17:00:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.289 17:00:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:43.289 17:00:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.289 17:00:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.289 17:00:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.289 17:00:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.289 17:00:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.289 17:00:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.289 17:00:58 -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.289 17:00:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.289 17:00:58 -- nvmf/common.sh@296 -- # e810=() 00:13:43.289 17:00:58 -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.289 17:00:58 -- nvmf/common.sh@297 -- # x722=() 00:13:43.289 17:00:58 -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.289 17:00:58 -- nvmf/common.sh@298 -- # mlx=() 00:13:43.289 17:00:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.289 17:00:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.289 17:00:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.289 17:00:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.289 17:00:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.289 17:00:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.289 17:00:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:43.289 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:43.289 17:00:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.289 17:00:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:43.289 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:43.289 17:00:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.289 17:00:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.289 17:00:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.289 17:00:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:43.289 17:00:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.289 17:00:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:43.289 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:43.289 17:00:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.289 17:00:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.289 17:00:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.289 17:00:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:43.289 17:00:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.289 17:00:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:43.289 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:43.289 17:00:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.289 17:00:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:43.289 17:00:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:43.289 17:00:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:43.289 17:00:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:43.289 17:00:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.289 17:00:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.289 17:00:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.289 17:00:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.289 17:00:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.289 17:00:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.289 17:00:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:13:43.289 17:00:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.289 17:00:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.289 17:00:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.289 17:00:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.289 17:00:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.289 17:00:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.289 17:00:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.289 17:00:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.289 17:00:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.289 17:00:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.548 17:00:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.548 17:00:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.548 17:00:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:13:43.548 00:13:43.548 --- 10.0.0.2 ping statistics --- 00:13:43.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.548 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:43.548 17:00:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:13:43.548 00:13:43.548 --- 10.0.0.1 ping statistics --- 00:13:43.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.548 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:43.548 17:00:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.548 17:00:59 -- nvmf/common.sh@411 -- # return 0 00:13:43.548 17:00:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:43.548 17:00:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.548 17:00:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:43.548 17:00:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:43.548 17:00:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.548 17:00:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:43.548 17:00:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:43.548 17:00:59 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:43.548 17:00:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:43.548 17:00:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:43.548 17:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:43.548 17:00:59 -- nvmf/common.sh@470 -- # nvmfpid=1684861 00:13:43.548 17:00:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:43.548 17:00:59 -- nvmf/common.sh@471 -- # waitforlisten 1684861 00:13:43.548 17:00:59 -- common/autotest_common.sh@817 -- # '[' -z 1684861 ']' 00:13:43.548 17:00:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.548 17:00:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:43.548 17:00:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:43.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.548 17:00:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:43.548 17:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:43.548 [2024-04-18 17:00:59.112687] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:13:43.548 [2024-04-18 17:00:59.112791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.548 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.548 [2024-04-18 17:00:59.183627] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.806 [2024-04-18 17:00:59.297553] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.806 [2024-04-18 17:00:59.297621] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.806 [2024-04-18 17:00:59.297637] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.806 [2024-04-18 17:00:59.297650] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.806 [2024-04-18 17:00:59.297662] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.806 [2024-04-18 17:00:59.297702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.806 17:00:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:43.806 17:00:59 -- common/autotest_common.sh@850 -- # return 0 00:13:43.806 17:00:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:43.806 17:00:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:43.806 17:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:43.806 17:00:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.806 17:00:59 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:43.806 17:00:59 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:44.064 true 00:13:44.064 17:00:59 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:44.064 17:00:59 -- target/tls.sh@73 -- # jq -r .tls_version 00:13:44.324 17:00:59 -- target/tls.sh@73 -- # version=0 00:13:44.324 17:00:59 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:44.324 17:00:59 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:44.589 17:01:00 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:44.590 17:01:00 -- target/tls.sh@81 -- # jq -r .tls_version 00:13:44.886 17:01:00 -- target/tls.sh@81 -- # version=13 00:13:44.886 17:01:00 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:44.886 17:01:00 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:45.150 17:01:00 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:45.150 17:01:00 -- target/tls.sh@89 -- # jq -r .tls_version 
00:13:45.410 17:01:00 -- target/tls.sh@89 -- # version=7 00:13:45.410 17:01:00 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:45.410 17:01:00 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:45.410 17:01:00 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:45.410 17:01:01 -- target/tls.sh@96 -- # ktls=false 00:13:45.410 17:01:01 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:45.410 17:01:01 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:45.668 17:01:01 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:45.668 17:01:01 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:45.926 17:01:01 -- target/tls.sh@104 -- # ktls=true 00:13:45.926 17:01:01 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:45.926 17:01:01 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:46.184 17:01:01 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:46.184 17:01:01 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:46.442 17:01:02 -- target/tls.sh@112 -- # ktls=false 00:13:46.442 17:01:02 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:46.442 17:01:02 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:46.442 17:01:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:46.442 17:01:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:46.442 17:01:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:46.442 17:01:02 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:13:46.442 17:01:02 -- nvmf/common.sh@693 -- # digest=1 00:13:46.442 17:01:02 -- nvmf/common.sh@694 -- # 
python - 00:13:46.442 17:01:02 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:46.442 17:01:02 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:46.442 17:01:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:46.442 17:01:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:46.442 17:01:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:46.442 17:01:02 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:13:46.442 17:01:02 -- nvmf/common.sh@693 -- # digest=1 00:13:46.442 17:01:02 -- nvmf/common.sh@694 -- # python - 00:13:46.702 17:01:02 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:46.702 17:01:02 -- target/tls.sh@121 -- # mktemp 00:13:46.702 17:01:02 -- target/tls.sh@121 -- # key_path=/tmp/tmp.weTem5L5gd 00:13:46.702 17:01:02 -- target/tls.sh@122 -- # mktemp 00:13:46.702 17:01:02 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.UnrreeMArB 00:13:46.702 17:01:02 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:46.702 17:01:02 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:46.702 17:01:02 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.weTem5L5gd 00:13:46.702 17:01:02 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.UnrreeMArB 00:13:46.702 17:01:02 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:46.960 17:01:02 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:13:47.218 17:01:02 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.weTem5L5gd 00:13:47.218 17:01:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.weTem5L5gd 00:13:47.218 17:01:02 -- target/tls.sh@51 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:47.476 [2024-04-18 17:01:03.053223] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.476 17:01:03 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:47.733 17:01:03 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:47.991 [2024-04-18 17:01:03.598716] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:47.991 [2024-04-18 17:01:03.598979] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.991 17:01:03 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:48.249 malloc0 00:13:48.249 17:01:03 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:48.507 17:01:04 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.weTem5L5gd 00:13:48.767 [2024-04-18 17:01:04.356786] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:48.768 17:01:04 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.weTem5L5gd 00:13:48.768 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.998 
Initializing NVMe Controllers 00:14:00.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:00.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:00.998 Initialization complete. Launching workers. 00:14:00.998 ======================================================== 00:14:00.998 Latency(us) 00:14:00.998 Device Information : IOPS MiB/s Average min max 00:14:00.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7704.89 30.10 8309.08 1322.98 9620.79 00:14:00.998 ======================================================== 00:14:00.998 Total : 7704.89 30.10 8309.08 1322.98 9620.79 00:14:00.998 00:14:00.998 17:01:14 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.weTem5L5gd 00:14:00.998 17:01:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:00.998 17:01:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:00.998 17:01:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:00.998 17:01:14 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.weTem5L5gd' 00:14:00.998 17:01:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.998 17:01:14 -- target/tls.sh@28 -- # bdevperf_pid=1686757 00:14:00.998 17:01:14 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:00.998 17:01:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:00.998 17:01:14 -- target/tls.sh@31 -- # waitforlisten 1686757 /var/tmp/bdevperf.sock 00:14:00.998 17:01:14 -- common/autotest_common.sh@817 -- # '[' -z 1686757 ']' 00:14:00.998 17:01:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.998 17:01:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:00.998 17:01:14 -- common/autotest_common.sh@824 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.998 17:01:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:00.998 17:01:14 -- common/autotest_common.sh@10 -- # set +x 00:14:00.998 [2024-04-18 17:01:14.522887] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:00.998 [2024-04-18 17:01:14.522979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686757 ] 00:14:00.998 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.998 [2024-04-18 17:01:14.580489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.998 [2024-04-18 17:01:14.683172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.998 17:01:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:00.998 17:01:14 -- common/autotest_common.sh@850 -- # return 0 00:14:00.998 17:01:14 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.weTem5L5gd 00:14:00.998 [2024-04-18 17:01:15.056016] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:00.998 [2024-04-18 17:01:15.056124] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:00.998 TLSTESTn1 00:14:00.998 17:01:15 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:00.998 
Running I/O for 10 seconds... 00:14:10.987 00:14:10.987 Latency(us) 00:14:10.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.987 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:10.987 Verification LBA range: start 0x0 length 0x2000 00:14:10.987 TLSTESTn1 : 10.03 2398.85 9.37 0.00 0.00 53267.17 7524.50 67186.54 00:14:10.987 =================================================================================================================== 00:14:10.987 Total : 2398.85 9.37 0.00 0.00 53267.17 7524.50 67186.54 00:14:10.987 0 00:14:10.987 17:01:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:10.987 17:01:25 -- target/tls.sh@45 -- # killprocess 1686757 00:14:10.987 17:01:25 -- common/autotest_common.sh@936 -- # '[' -z 1686757 ']' 00:14:10.988 17:01:25 -- common/autotest_common.sh@940 -- # kill -0 1686757 00:14:10.988 17:01:25 -- common/autotest_common.sh@941 -- # uname 00:14:10.988 17:01:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.988 17:01:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1686757 00:14:10.988 17:01:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:10.988 17:01:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:10.988 17:01:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1686757' 00:14:10.988 killing process with pid 1686757 00:14:10.988 17:01:25 -- common/autotest_common.sh@955 -- # kill 1686757 00:14:10.988 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.988 00:14:10.988 Latency(us) 00:14:10.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.988 =================================================================================================================== 00:14:10.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.988 [2024-04-18 17:01:25.356493] app.c: 937:log_deprecation_hits: 
*WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:10.988 17:01:25 -- common/autotest_common.sh@960 -- # wait 1686757 00:14:10.988 17:01:25 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UnrreeMArB 00:14:10.988 17:01:25 -- common/autotest_common.sh@638 -- # local es=0 00:14:10.988 17:01:25 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UnrreeMArB 00:14:10.988 17:01:25 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:10.988 17:01:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:10.988 17:01:25 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:10.988 17:01:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:10.988 17:01:25 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UnrreeMArB 00:14:10.988 17:01:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:10.988 17:01:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:10.988 17:01:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:10.988 17:01:25 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UnrreeMArB' 00:14:10.988 17:01:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.988 17:01:25 -- target/tls.sh@28 -- # bdevperf_pid=1688008 00:14:10.988 17:01:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.988 17:01:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:10.988 17:01:25 -- target/tls.sh@31 -- # waitforlisten 1688008 /var/tmp/bdevperf.sock 00:14:10.988 17:01:25 -- common/autotest_common.sh@817 -- # '[' -z 1688008 ']' 00:14:10.988 17:01:25 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.988 17:01:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:10.988 17:01:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.988 17:01:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:10.988 17:01:25 -- common/autotest_common.sh@10 -- # set +x 00:14:10.988 [2024-04-18 17:01:25.667114] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:10.988 [2024-04-18 17:01:25.667202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688008 ] 00:14:10.988 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.988 [2024-04-18 17:01:25.728905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.988 [2024-04-18 17:01:25.831756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.988 17:01:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:10.988 17:01:25 -- common/autotest_common.sh@850 -- # return 0 00:14:10.988 17:01:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UnrreeMArB 00:14:10.988 [2024-04-18 17:01:26.165247] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.988 [2024-04-18 17:01:26.165376] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
00:14:10.988 [2024-04-18 17:01:26.170817] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:10.988 [2024-04-18 17:01:26.171220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd87230 (107): Transport endpoint is not connected 00:14:10.988 [2024-04-18 17:01:26.172208] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd87230 (9): Bad file descriptor 00:14:10.988 [2024-04-18 17:01:26.173207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:10.988 [2024-04-18 17:01:26.173227] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:10.988 [2024-04-18 17:01:26.173239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:10.988 request: 00:14:10.988 { 00:14:10.988 "name": "TLSTEST", 00:14:10.988 "trtype": "tcp", 00:14:10.988 "traddr": "10.0.0.2", 00:14:10.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.988 "adrfam": "ipv4", 00:14:10.988 "trsvcid": "4420", 00:14:10.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.988 "psk": "/tmp/tmp.UnrreeMArB", 00:14:10.988 "method": "bdev_nvme_attach_controller", 00:14:10.988 "req_id": 1 00:14:10.988 } 00:14:10.988 Got JSON-RPC error response 00:14:10.988 response: 00:14:10.988 { 00:14:10.988 "code": -32602, 00:14:10.988 "message": "Invalid parameters" 00:14:10.988 } 00:14:10.988 17:01:26 -- target/tls.sh@36 -- # killprocess 1688008 00:14:10.988 17:01:26 -- common/autotest_common.sh@936 -- # '[' -z 1688008 ']' 00:14:10.988 17:01:26 -- common/autotest_common.sh@940 -- # kill -0 1688008 00:14:10.988 17:01:26 -- common/autotest_common.sh@941 -- # uname 00:14:10.988 17:01:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.988 17:01:26 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1688008 00:14:10.988 17:01:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:10.988 17:01:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:10.988 17:01:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1688008' 00:14:10.988 killing process with pid 1688008 00:14:10.988 17:01:26 -- common/autotest_common.sh@955 -- # kill 1688008 00:14:10.988 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.988 00:14:10.988 Latency(us) 00:14:10.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.988 =================================================================================================================== 00:14:10.988 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:10.988 [2024-04-18 17:01:26.215645] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:10.988 17:01:26 -- common/autotest_common.sh@960 -- # wait 1688008 00:14:10.988 17:01:26 -- target/tls.sh@37 -- # return 1 00:14:10.988 17:01:26 -- common/autotest_common.sh@641 -- # es=1 00:14:10.988 17:01:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:10.988 17:01:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:10.988 17:01:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:10.988 17:01:26 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.weTem5L5gd 00:14:10.988 17:01:26 -- common/autotest_common.sh@638 -- # local es=0 00:14:10.988 17:01:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.weTem5L5gd 00:14:10.988 17:01:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:10.988 17:01:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:14:10.988 17:01:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:10.988 17:01:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:10.988 17:01:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.weTem5L5gd 00:14:10.988 17:01:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:10.988 17:01:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:10.988 17:01:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:10.988 17:01:26 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.weTem5L5gd' 00:14:10.988 17:01:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.988 17:01:26 -- target/tls.sh@28 -- # bdevperf_pid=1688096 00:14:10.988 17:01:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:10.988 17:01:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.988 17:01:26 -- target/tls.sh@31 -- # waitforlisten 1688096 /var/tmp/bdevperf.sock 00:14:10.988 17:01:26 -- common/autotest_common.sh@817 -- # '[' -z 1688096 ']' 00:14:10.988 17:01:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.989 17:01:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:10.989 17:01:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.989 17:01:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:10.989 17:01:26 -- common/autotest_common.sh@10 -- # set +x 00:14:10.989 [2024-04-18 17:01:26.485230] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:10.989 [2024-04-18 17:01:26.485320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688096 ] 00:14:10.989 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.989 [2024-04-18 17:01:26.543721] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.989 [2024-04-18 17:01:26.650118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.248 17:01:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:11.248 17:01:26 -- common/autotest_common.sh@850 -- # return 0 00:14:11.248 17:01:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.weTem5L5gd 00:14:11.507 [2024-04-18 17:01:26.994538] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.507 [2024-04-18 17:01:26.994659] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:11.507 [2024-04-18 17:01:27.001479] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:11.507 [2024-04-18 17:01:27.001512] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:11.507 [2024-04-18 17:01:27.001576] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:11.507 [2024-04-18 17:01:27.002540] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc22230 (107): Transport endpoint is not connected 00:14:11.507 [2024-04-18 17:01:27.003530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc22230 (9): Bad file descriptor 00:14:11.507 [2024-04-18 17:01:27.004528] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:11.507 [2024-04-18 17:01:27.004549] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:11.508 [2024-04-18 17:01:27.004561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:11.508 request: 00:14:11.508 { 00:14:11.508 "name": "TLSTEST", 00:14:11.508 "trtype": "tcp", 00:14:11.508 "traddr": "10.0.0.2", 00:14:11.508 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:11.508 "adrfam": "ipv4", 00:14:11.508 "trsvcid": "4420", 00:14:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.508 "psk": "/tmp/tmp.weTem5L5gd", 00:14:11.508 "method": "bdev_nvme_attach_controller", 00:14:11.508 "req_id": 1 00:14:11.508 } 00:14:11.508 Got JSON-RPC error response 00:14:11.508 response: 00:14:11.508 { 00:14:11.508 "code": -32602, 00:14:11.508 "message": "Invalid parameters" 00:14:11.508 } 00:14:11.508 17:01:27 -- target/tls.sh@36 -- # killprocess 1688096 00:14:11.508 17:01:27 -- common/autotest_common.sh@936 -- # '[' -z 1688096 ']' 00:14:11.508 17:01:27 -- common/autotest_common.sh@940 -- # kill -0 1688096 00:14:11.508 17:01:27 -- common/autotest_common.sh@941 -- # uname 00:14:11.508 17:01:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.508 17:01:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1688096 00:14:11.508 17:01:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:11.508 17:01:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:11.508 17:01:27 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1688096' 00:14:11.508 killing process with pid 1688096 00:14:11.508 17:01:27 -- common/autotest_common.sh@955 -- # kill 1688096 00:14:11.508 Received shutdown signal, test time was about 10.000000 seconds 00:14:11.508 00:14:11.508 Latency(us) 00:14:11.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.508 =================================================================================================================== 00:14:11.508 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.508 [2024-04-18 17:01:27.055333] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:11.508 17:01:27 -- common/autotest_common.sh@960 -- # wait 1688096 00:14:11.769 17:01:27 -- target/tls.sh@37 -- # return 1 00:14:11.769 17:01:27 -- common/autotest_common.sh@641 -- # es=1 00:14:11.769 17:01:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:11.769 17:01:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:11.769 17:01:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:11.769 17:01:27 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.weTem5L5gd 00:14:11.769 17:01:27 -- common/autotest_common.sh@638 -- # local es=0 00:14:11.770 17:01:27 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.weTem5L5gd 00:14:11.770 17:01:27 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:11.770 17:01:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:11.770 17:01:27 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:11.770 17:01:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:11.770 17:01:27 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.weTem5L5gd 00:14:11.770 17:01:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:11.770 17:01:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:11.770 17:01:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:11.770 17:01:27 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.weTem5L5gd' 00:14:11.770 17:01:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.770 17:01:27 -- target/tls.sh@28 -- # bdevperf_pid=1688235 00:14:11.770 17:01:27 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:11.770 17:01:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:11.770 17:01:27 -- target/tls.sh@31 -- # waitforlisten 1688235 /var/tmp/bdevperf.sock 00:14:11.770 17:01:27 -- common/autotest_common.sh@817 -- # '[' -z 1688235 ']' 00:14:11.770 17:01:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.770 17:01:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:11.770 17:01:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.770 17:01:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:11.770 17:01:27 -- common/autotest_common.sh@10 -- # set +x 00:14:11.770 [2024-04-18 17:01:27.362319] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:11.770 [2024-04-18 17:01:27.362420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688235 ] 00:14:11.770 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.770 [2024-04-18 17:01:27.427489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.030 [2024-04-18 17:01:27.538241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.030 17:01:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:12.030 17:01:27 -- common/autotest_common.sh@850 -- # return 0 00:14:12.030 17:01:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.weTem5L5gd 00:14:12.288 [2024-04-18 17:01:27.874605] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:12.288 [2024-04-18 17:01:27.874731] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:12.288 [2024-04-18 17:01:27.883910] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:12.288 [2024-04-18 17:01:27.883944] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:12.288 [2024-04-18 17:01:27.883993] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:12.288 [2024-04-18 17:01:27.884888] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1f230 (107): Transport endpoint is not connected 00:14:12.288 [2024-04-18 17:01:27.885877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1f230 (9): Bad file descriptor 00:14:12.288 [2024-04-18 17:01:27.886879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:12.288 [2024-04-18 17:01:27.886899] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:12.288 [2024-04-18 17:01:27.886912] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:14:12.289 request: 00:14:12.289 { 00:14:12.289 "name": "TLSTEST", 00:14:12.289 "trtype": "tcp", 00:14:12.289 "traddr": "10.0.0.2", 00:14:12.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.289 "adrfam": "ipv4", 00:14:12.289 "trsvcid": "4420", 00:14:12.289 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:12.289 "psk": "/tmp/tmp.weTem5L5gd", 00:14:12.289 "method": "bdev_nvme_attach_controller", 00:14:12.289 "req_id": 1 00:14:12.289 } 00:14:12.289 Got JSON-RPC error response 00:14:12.289 response: 00:14:12.289 { 00:14:12.289 "code": -32602, 00:14:12.289 "message": "Invalid parameters" 00:14:12.289 } 00:14:12.289 17:01:27 -- target/tls.sh@36 -- # killprocess 1688235 00:14:12.289 17:01:27 -- common/autotest_common.sh@936 -- # '[' -z 1688235 ']' 00:14:12.289 17:01:27 -- common/autotest_common.sh@940 -- # kill -0 1688235 00:14:12.289 17:01:27 -- common/autotest_common.sh@941 -- # uname 00:14:12.289 17:01:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:12.289 17:01:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1688235 00:14:12.289 17:01:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:12.289 17:01:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:12.289 17:01:27 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1688235' 00:14:12.289 killing process with pid 1688235 00:14:12.289 17:01:27 -- common/autotest_common.sh@955 -- # kill 1688235 00:14:12.289 Received shutdown signal, test time was about 10.000000 seconds 00:14:12.289 00:14:12.289 Latency(us) 00:14:12.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.289 =================================================================================================================== 00:14:12.289 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:12.289 [2024-04-18 17:01:27.933928] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:12.289 17:01:27 -- common/autotest_common.sh@960 -- # wait 1688235 00:14:12.547 17:01:28 -- target/tls.sh@37 -- # return 1 00:14:12.547 17:01:28 -- common/autotest_common.sh@641 -- # es=1 00:14:12.547 17:01:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:12.548 17:01:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:12.548 17:01:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:12.548 17:01:28 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:12.548 17:01:28 -- common/autotest_common.sh@638 -- # local es=0 00:14:12.548 17:01:28 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:12.548 17:01:28 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:12.548 17:01:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:12.548 17:01:28 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:12.548 17:01:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:12.548 17:01:28 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:12.548 17:01:28 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:12.548 17:01:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:12.548 17:01:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:12.548 17:01:28 -- target/tls.sh@23 -- # psk= 00:14:12.548 17:01:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:12.548 17:01:28 -- target/tls.sh@28 -- # bdevperf_pid=1688375 00:14:12.548 17:01:28 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:12.548 17:01:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:12.548 17:01:28 -- target/tls.sh@31 -- # waitforlisten 1688375 /var/tmp/bdevperf.sock 00:14:12.548 17:01:28 -- common/autotest_common.sh@817 -- # '[' -z 1688375 ']' 00:14:12.548 17:01:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:12.548 17:01:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:12.548 17:01:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:12.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:12.548 17:01:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:12.548 17:01:28 -- common/autotest_common.sh@10 -- # set +x 00:14:12.548 [2024-04-18 17:01:28.220660] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:12.548 [2024-04-18 17:01:28.220750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688375 ] 00:14:12.548 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.806 [2024-04-18 17:01:28.279059] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.806 [2024-04-18 17:01:28.379324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.806 17:01:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:12.806 17:01:28 -- common/autotest_common.sh@850 -- # return 0 00:14:12.806 17:01:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:13.065 [2024-04-18 17:01:28.716130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:13.065 [2024-04-18 17:01:28.717732] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x892ba0 (9): Bad file descriptor 00:14:13.065 [2024-04-18 17:01:28.718727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:13.065 [2024-04-18 17:01:28.718748] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:13.065 [2024-04-18 17:01:28.718760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:13.065 request: 00:14:13.065 { 00:14:13.065 "name": "TLSTEST", 00:14:13.065 "trtype": "tcp", 00:14:13.065 "traddr": "10.0.0.2", 00:14:13.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.065 "adrfam": "ipv4", 00:14:13.065 "trsvcid": "4420", 00:14:13.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.065 "method": "bdev_nvme_attach_controller", 00:14:13.065 "req_id": 1 00:14:13.065 } 00:14:13.065 Got JSON-RPC error response 00:14:13.065 response: 00:14:13.065 { 00:14:13.065 "code": -32602, 00:14:13.065 "message": "Invalid parameters" 00:14:13.065 } 00:14:13.065 17:01:28 -- target/tls.sh@36 -- # killprocess 1688375 00:14:13.065 17:01:28 -- common/autotest_common.sh@936 -- # '[' -z 1688375 ']' 00:14:13.065 17:01:28 -- common/autotest_common.sh@940 -- # kill -0 1688375 00:14:13.065 17:01:28 -- common/autotest_common.sh@941 -- # uname 00:14:13.065 17:01:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.065 17:01:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1688375 00:14:13.065 17:01:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:13.065 17:01:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:13.065 17:01:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1688375' 00:14:13.065 killing process with pid 1688375 00:14:13.065 17:01:28 -- common/autotest_common.sh@955 -- # kill 1688375 00:14:13.065 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.065 00:14:13.065 Latency(us) 00:14:13.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.065 =================================================================================================================== 00:14:13.065 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:13.065 17:01:28 -- common/autotest_common.sh@960 -- # wait 1688375 00:14:13.324 17:01:29 -- target/tls.sh@37 -- # return 1 00:14:13.324 17:01:29 -- 
common/autotest_common.sh@641 -- # es=1 00:14:13.324 17:01:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:13.324 17:01:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:13.324 17:01:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:13.324 17:01:29 -- target/tls.sh@158 -- # killprocess 1684861 00:14:13.324 17:01:29 -- common/autotest_common.sh@936 -- # '[' -z 1684861 ']' 00:14:13.324 17:01:29 -- common/autotest_common.sh@940 -- # kill -0 1684861 00:14:13.324 17:01:29 -- common/autotest_common.sh@941 -- # uname 00:14:13.324 17:01:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.324 17:01:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1684861 00:14:13.584 17:01:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:13.584 17:01:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:13.584 17:01:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1684861' 00:14:13.584 killing process with pid 1684861 00:14:13.584 17:01:29 -- common/autotest_common.sh@955 -- # kill 1684861 00:14:13.584 [2024-04-18 17:01:29.052989] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:13.584 17:01:29 -- common/autotest_common.sh@960 -- # wait 1684861 00:14:13.843 17:01:29 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:13.843 17:01:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:13.843 17:01:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:13.843 17:01:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:13.843 17:01:29 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:13.843 17:01:29 -- nvmf/common.sh@693 -- # digest=2 00:14:13.843 17:01:29 -- nvmf/common.sh@694 -- # python - 00:14:13.843 17:01:29 -- 
target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:13.843 17:01:29 -- target/tls.sh@160 -- # mktemp 00:14:13.843 17:01:29 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.rWc4tKrkDB 00:14:13.843 17:01:29 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:13.843 17:01:29 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.rWc4tKrkDB 00:14:13.843 17:01:29 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:13.843 17:01:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:13.843 17:01:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:13.843 17:01:29 -- common/autotest_common.sh@10 -- # set +x 00:14:13.843 17:01:29 -- nvmf/common.sh@470 -- # nvmfpid=1688525 00:14:13.843 17:01:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:13.843 17:01:29 -- nvmf/common.sh@471 -- # waitforlisten 1688525 00:14:13.843 17:01:29 -- common/autotest_common.sh@817 -- # '[' -z 1688525 ']' 00:14:13.843 17:01:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.843 17:01:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:13.843 17:01:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.843 17:01:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:13.843 17:01:29 -- common/autotest_common.sh@10 -- # set +x 00:14:13.843 [2024-04-18 17:01:29.444896] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:13.843 [2024-04-18 17:01:29.444978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.843 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.843 [2024-04-18 17:01:29.511396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.102 [2024-04-18 17:01:29.626323] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.102 [2024-04-18 17:01:29.626398] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.102 [2024-04-18 17:01:29.626419] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.102 [2024-04-18 17:01:29.626433] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.102 [2024-04-18 17:01:29.626445] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:14.102 [2024-04-18 17:01:29.626484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.038 17:01:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:15.038 17:01:30 -- common/autotest_common.sh@850 -- # return 0 00:14:15.038 17:01:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:15.038 17:01:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:15.038 17:01:30 -- common/autotest_common.sh@10 -- # set +x 00:14:15.038 17:01:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.038 17:01:30 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.rWc4tKrkDB 00:14:15.038 17:01:30 -- target/tls.sh@49 -- # local key=/tmp/tmp.rWc4tKrkDB 00:14:15.038 17:01:30 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:15.038 [2024-04-18 17:01:30.629649] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.038 17:01:30 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:15.298 17:01:30 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:15.603 [2024-04-18 17:01:31.102905] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:15.603 [2024-04-18 17:01:31.103182] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.603 17:01:31 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:15.861 malloc0 00:14:15.861 17:01:31 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:14:16.119 17:01:31 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rWc4tKrkDB 00:14:16.378 [2024-04-18 17:01:31.832552] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:16.378 17:01:31 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rWc4tKrkDB 00:14:16.378 17:01:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:16.378 17:01:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:16.378 17:01:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:16.378 17:01:31 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rWc4tKrkDB' 00:14:16.378 17:01:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:16.378 17:01:31 -- target/tls.sh@28 -- # bdevperf_pid=1688815 00:14:16.378 17:01:31 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:16.378 17:01:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:16.378 17:01:31 -- target/tls.sh@31 -- # waitforlisten 1688815 /var/tmp/bdevperf.sock 00:14:16.378 17:01:31 -- common/autotest_common.sh@817 -- # '[' -z 1688815 ']' 00:14:16.378 17:01:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.378 17:01:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:16.378 17:01:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:16.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:16.378 17:01:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:16.378 17:01:31 -- common/autotest_common.sh@10 -- # set +x 00:14:16.378 [2024-04-18 17:01:31.896141] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:16.378 [2024-04-18 17:01:31.896230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688815 ] 00:14:16.378 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.378 [2024-04-18 17:01:31.960075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.378 [2024-04-18 17:01:32.077562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.312 17:01:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:17.312 17:01:32 -- common/autotest_common.sh@850 -- # return 0 00:14:17.312 17:01:32 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rWc4tKrkDB 00:14:17.571 [2024-04-18 17:01:33.038176] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:17.571 [2024-04-18 17:01:33.038313] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:17.571 TLSTESTn1 00:14:17.571 17:01:33 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:17.571 Running I/O for 10 seconds... 
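The summary table bdevperf prints next can be sanity-checked with a one-liner: the MiB/s column is just IOPS times the 4096-byte IO size configured with `-o 4096`, divided by 2^20. A minimal sketch, with the figures copied from the run below (awk is used only for the floating-point math):

```shell
# Sanity-check the bdevperf summary line: MiB/s = IOPS * io_size / 2^20.
# 3234.88 IOPS at 4096-byte IOs should land on the reported 12.64 MiB/s.
iops=3234.88
io_size=4096
mib_s=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mib_s MiB/s"   # 12.64 MiB/s
```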
00:14:29.801 00:14:29.801 Latency(us) 00:14:29.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.801 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:29.801 Verification LBA range: start 0x0 length 0x2000 00:14:29.801 TLSTESTn1 : 10.02 3234.88 12.64 0.00 0.00 39499.23 9077.95 40583.77 00:14:29.801 =================================================================================================================== 00:14:29.801 Total : 3234.88 12.64 0.00 0.00 39499.23 9077.95 40583.77 00:14:29.801 0 00:14:29.801 17:01:43 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.801 17:01:43 -- target/tls.sh@45 -- # killprocess 1688815 00:14:29.801 17:01:43 -- common/autotest_common.sh@936 -- # '[' -z 1688815 ']' 00:14:29.801 17:01:43 -- common/autotest_common.sh@940 -- # kill -0 1688815 00:14:29.801 17:01:43 -- common/autotest_common.sh@941 -- # uname 00:14:29.801 17:01:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.801 17:01:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1688815 00:14:29.801 17:01:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:29.801 17:01:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:29.801 17:01:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1688815' 00:14:29.801 killing process with pid 1688815 00:14:29.801 17:01:43 -- common/autotest_common.sh@955 -- # kill 1688815 00:14:29.801 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.801 00:14:29.801 Latency(us) 00:14:29.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.801 =================================================================================================================== 00:14:29.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.801 [2024-04-18 17:01:43.329989] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:29.801 17:01:43 -- common/autotest_common.sh@960 -- # wait 1688815 00:14:29.801 17:01:43 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.rWc4tKrkDB 00:14:29.801 17:01:43 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rWc4tKrkDB 00:14:29.801 17:01:43 -- common/autotest_common.sh@638 -- # local es=0 00:14:29.801 17:01:43 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rWc4tKrkDB 00:14:29.801 17:01:43 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:29.801 17:01:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:29.801 17:01:43 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:29.801 17:01:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:29.801 17:01:43 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rWc4tKrkDB 00:14:29.801 17:01:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:29.801 17:01:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:29.801 17:01:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:29.802 17:01:43 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rWc4tKrkDB' 00:14:29.802 17:01:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:29.802 17:01:43 -- target/tls.sh@28 -- # bdevperf_pid=1690147 00:14:29.802 17:01:43 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:29.802 17:01:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.802 17:01:43 -- target/tls.sh@31 -- # waitforlisten 1690147 /var/tmp/bdevperf.sock 00:14:29.802 17:01:43 -- common/autotest_common.sh@817 -- # '[' -z 
1690147 ']' 00:14:29.802 17:01:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.802 17:01:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:29.802 17:01:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:29.802 17:01:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:29.802 17:01:43 -- common/autotest_common.sh@10 -- # set +x 00:14:29.802 [2024-04-18 17:01:43.610797] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:29.802 [2024-04-18 17:01:43.610894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690147 ] 00:14:29.802 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.802 [2024-04-18 17:01:43.668687] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.802 [2024-04-18 17:01:43.776227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.802 17:01:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:29.802 17:01:43 -- common/autotest_common.sh@850 -- # return 0 00:14:29.802 17:01:43 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rWc4tKrkDB 00:14:29.802 [2024-04-18 17:01:44.098067] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:29.802 [2024-04-18 17:01:44.098156] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:29.802 [2024-04-18 
17:01:44.098170] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.rWc4tKrkDB 00:14:29.802 request: 00:14:29.802 { 00:14:29.802 "name": "TLSTEST", 00:14:29.802 "trtype": "tcp", 00:14:29.802 "traddr": "10.0.0.2", 00:14:29.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:29.802 "adrfam": "ipv4", 00:14:29.802 "trsvcid": "4420", 00:14:29.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.802 "psk": "/tmp/tmp.rWc4tKrkDB", 00:14:29.802 "method": "bdev_nvme_attach_controller", 00:14:29.802 "req_id": 1 00:14:29.802 } 00:14:29.802 Got JSON-RPC error response 00:14:29.802 response: 00:14:29.802 { 00:14:29.802 "code": -1, 00:14:29.802 "message": "Operation not permitted" 00:14:29.802 } 00:14:29.802 17:01:44 -- target/tls.sh@36 -- # killprocess 1690147 00:14:29.802 17:01:44 -- common/autotest_common.sh@936 -- # '[' -z 1690147 ']' 00:14:29.802 17:01:44 -- common/autotest_common.sh@940 -- # kill -0 1690147 00:14:29.802 17:01:44 -- common/autotest_common.sh@941 -- # uname 00:14:29.802 17:01:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.802 17:01:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1690147 00:14:29.802 17:01:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:29.802 17:01:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:29.802 17:01:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1690147' 00:14:29.802 killing process with pid 1690147 00:14:29.802 17:01:44 -- common/autotest_common.sh@955 -- # kill 1690147 00:14:29.802 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.802 00:14:29.802 Latency(us) 00:14:29.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.802 =================================================================================================================== 00:14:29.802 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:29.802 17:01:44 -- 
common/autotest_common.sh@960 -- # wait 1690147 00:14:29.802 17:01:44 -- target/tls.sh@37 -- # return 1 00:14:29.802 17:01:44 -- common/autotest_common.sh@641 -- # es=1 00:14:29.802 17:01:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:29.802 17:01:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:29.802 17:01:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:29.802 17:01:44 -- target/tls.sh@174 -- # killprocess 1688525 00:14:29.802 17:01:44 -- common/autotest_common.sh@936 -- # '[' -z 1688525 ']' 00:14:29.802 17:01:44 -- common/autotest_common.sh@940 -- # kill -0 1688525 00:14:29.802 17:01:44 -- common/autotest_common.sh@941 -- # uname 00:14:29.802 17:01:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.802 17:01:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1688525 00:14:29.802 17:01:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:29.802 17:01:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:29.802 17:01:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1688525' 00:14:29.802 killing process with pid 1688525 00:14:29.802 17:01:44 -- common/autotest_common.sh@955 -- # kill 1688525 00:14:29.802 [2024-04-18 17:01:44.425045] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:29.802 17:01:44 -- common/autotest_common.sh@960 -- # wait 1688525 00:14:29.802 17:01:44 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:29.802 17:01:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:29.802 17:01:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:29.802 17:01:44 -- common/autotest_common.sh@10 -- # set +x 00:14:29.802 17:01:44 -- nvmf/common.sh@470 -- # nvmfpid=1690289 00:14:29.802 17:01:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:14:29.802 17:01:44 -- nvmf/common.sh@471 -- # waitforlisten 1690289 00:14:29.802 17:01:44 -- common/autotest_common.sh@817 -- # '[' -z 1690289 ']' 00:14:29.802 17:01:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.802 17:01:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:29.802 17:01:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.802 17:01:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:29.802 17:01:44 -- common/autotest_common.sh@10 -- # set +x 00:14:29.802 [2024-04-18 17:01:44.777988] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:29.802 [2024-04-18 17:01:44.778071] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.802 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.802 [2024-04-18 17:01:44.849229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.802 [2024-04-18 17:01:44.967345] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.802 [2024-04-18 17:01:44.967419] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.802 [2024-04-18 17:01:44.967447] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.802 [2024-04-18 17:01:44.967461] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.802 [2024-04-18 17:01:44.967473] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:29.802 [2024-04-18 17:01:44.967510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.061 17:01:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:30.061 17:01:45 -- common/autotest_common.sh@850 -- # return 0 00:14:30.061 17:01:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:30.061 17:01:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:30.061 17:01:45 -- common/autotest_common.sh@10 -- # set +x 00:14:30.061 17:01:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.061 17:01:45 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.rWc4tKrkDB 00:14:30.061 17:01:45 -- common/autotest_common.sh@638 -- # local es=0 00:14:30.061 17:01:45 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rWc4tKrkDB 00:14:30.061 17:01:45 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:14:30.061 17:01:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:30.061 17:01:45 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:14:30.061 17:01:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:30.061 17:01:45 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.rWc4tKrkDB 00:14:30.061 17:01:45 -- target/tls.sh@49 -- # local key=/tmp/tmp.rWc4tKrkDB 00:14:30.061 17:01:45 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:30.319 [2024-04-18 17:01:45.947483] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.319 17:01:45 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:30.577 17:01:46 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -k 00:14:30.835 [2024-04-18 17:01:46.440831] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:30.835 [2024-04-18 17:01:46.441104] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.835 17:01:46 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:31.093 malloc0 00:14:31.093 17:01:46 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:31.352 17:01:46 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rWc4tKrkDB 00:14:31.612 [2024-04-18 17:01:47.153989] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:31.612 [2024-04-18 17:01:47.154030] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:31.612 [2024-04-18 17:01:47.154078] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:14:31.612 request: 00:14:31.612 { 00:14:31.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.612 "host": "nqn.2016-06.io.spdk:host1", 00:14:31.612 "psk": "/tmp/tmp.rWc4tKrkDB", 00:14:31.612 "method": "nvmf_subsystem_add_host", 00:14:31.612 "req_id": 1 00:14:31.612 } 00:14:31.612 Got JSON-RPC error response 00:14:31.612 response: 00:14:31.612 { 00:14:31.612 "code": -32603, 00:14:31.612 "message": "Internal error" 00:14:31.612 } 00:14:31.612 17:01:47 -- common/autotest_common.sh@641 -- # es=1 00:14:31.612 17:01:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.612 17:01:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.612 17:01:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.612 17:01:47 -- target/tls.sh@180 -- # killprocess 1690289 
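Both RPC failures above come down to the PSK file's mode bits: after the test's `chmod 0666`, the target refuses to load `/tmp/tmp.rWc4tKrkDB` ("Incorrect permissions for PSK file"), and the later `chmod 0600` restores it. A self-contained approximation of that check; the helper name and the accepted-mode list here are our own illustration, not an SPDK API:

```shell
# Approximate the PSK-permission behaviour seen in this log: reject a key
# file readable by group or others (hypothetical helper, not SPDK code).
check_psk_perms() {
    local mode
    mode=$(stat -c %a "$1")     # octal mode, e.g. 600 or 666 (GNU stat)
    case "$mode" in
        600|400) return 0 ;;
        *) echo "Incorrect permissions ($mode) for PSK file" >&2; return 1 ;;
    esac
}
psk=$(mktemp)                   # mktemp creates the file mode 0600
chmod 0666 "$psk"
check_psk_perms "$psk" || echo "rejected"   # rejected
chmod 0600 "$psk"
check_psk_perms "$psk" && echo "accepted"   # accepted
rm -f "$psk"
```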
00:14:31.612 17:01:47 -- common/autotest_common.sh@936 -- # '[' -z 1690289 ']' 00:14:31.612 17:01:47 -- common/autotest_common.sh@940 -- # kill -0 1690289 00:14:31.612 17:01:47 -- common/autotest_common.sh@941 -- # uname 00:14:31.612 17:01:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.612 17:01:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1690289 00:14:31.612 17:01:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:31.612 17:01:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:31.612 17:01:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1690289' 00:14:31.612 killing process with pid 1690289 00:14:31.612 17:01:47 -- common/autotest_common.sh@955 -- # kill 1690289 00:14:31.612 17:01:47 -- common/autotest_common.sh@960 -- # wait 1690289 00:14:31.871 17:01:47 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.rWc4tKrkDB 00:14:31.871 17:01:47 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:31.871 17:01:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:31.871 17:01:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:31.871 17:01:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.871 17:01:47 -- nvmf/common.sh@470 -- # nvmfpid=1690712 00:14:31.871 17:01:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:31.871 17:01:47 -- nvmf/common.sh@471 -- # waitforlisten 1690712 00:14:31.871 17:01:47 -- common/autotest_common.sh@817 -- # '[' -z 1690712 ']' 00:14:31.871 17:01:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.871 17:01:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.871 17:01:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:31.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.871 17:01:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.871 17:01:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.871 [2024-04-18 17:01:47.561059] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:31.871 [2024-04-18 17:01:47.561145] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.132 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.132 [2024-04-18 17:01:47.628024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.132 [2024-04-18 17:01:47.741269] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.132 [2024-04-18 17:01:47.741339] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.132 [2024-04-18 17:01:47.741372] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.132 [2024-04-18 17:01:47.741397] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.132 [2024-04-18 17:01:47.741411] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:32.132 [2024-04-18 17:01:47.741443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.070 17:01:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:33.070 17:01:48 -- common/autotest_common.sh@850 -- # return 0 00:14:33.070 17:01:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:33.070 17:01:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:33.070 17:01:48 -- common/autotest_common.sh@10 -- # set +x 00:14:33.070 17:01:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.070 17:01:48 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.rWc4tKrkDB 00:14:33.070 17:01:48 -- target/tls.sh@49 -- # local key=/tmp/tmp.rWc4tKrkDB 00:14:33.070 17:01:48 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:33.329 [2024-04-18 17:01:48.802549] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.329 17:01:48 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:33.587 17:01:49 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:33.845 [2024-04-18 17:01:49.339963] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:33.845 [2024-04-18 17:01:49.340205] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.845 17:01:49 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:34.102 malloc0 00:14:34.102 17:01:49 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
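The `NOT run_bdevperf ...` / `es=1` traces earlier in this log are the harness's expected-failure idiom: the wrapped command must fail for the surrounding test to pass. Stripped to its core, as a simplified stand-in for the `NOT` helper in autotest_common.sh rather than the real implementation:

```shell
# Simplified expected-failure wrapper in the spirit of autotest_common.sh's
# NOT helper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as the test expected
}
NOT false && echo "expected failure observed"   # expected failure observed
```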
00:14:34.363 17:01:49 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rWc4tKrkDB 00:14:34.621 [2024-04-18 17:01:50.133300] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:34.621 17:01:50 -- target/tls.sh@188 -- # bdevperf_pid=1691006 00:14:34.621 17:01:50 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:34.621 17:01:50 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:34.621 17:01:50 -- target/tls.sh@191 -- # waitforlisten 1691006 /var/tmp/bdevperf.sock 00:14:34.621 17:01:50 -- common/autotest_common.sh@817 -- # '[' -z 1691006 ']' 00:14:34.622 17:01:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.622 17:01:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:34.622 17:01:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:34.622 17:01:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:34.622 17:01:50 -- common/autotest_common.sh@10 -- # set +x 00:14:34.622 [2024-04-18 17:01:50.194798] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:34.622 [2024-04-18 17:01:50.194885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691006 ] 00:14:34.622 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.622 [2024-04-18 17:01:50.252854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.880 [2024-04-18 17:01:50.359569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.880 17:01:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:34.880 17:01:50 -- common/autotest_common.sh@850 -- # return 0 00:14:34.880 17:01:50 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rWc4tKrkDB 00:14:35.139 [2024-04-18 17:01:50.707439] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:35.139 [2024-04-18 17:01:50.707558] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:35.139 TLSTESTn1 00:14:35.139 17:01:50 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:14:35.705 17:01:51 -- target/tls.sh@196 -- # tgtconf='{ 00:14:35.705 "subsystems": [ 00:14:35.705 { 00:14:35.705 "subsystem": "keyring", 00:14:35.705 "config": [] 00:14:35.705 }, 00:14:35.705 { 00:14:35.705 "subsystem": "iobuf", 00:14:35.705 "config": [ 00:14:35.705 { 00:14:35.706 "method": "iobuf_set_options", 00:14:35.706 "params": { 00:14:35.706 "small_pool_count": 8192, 00:14:35.706 "large_pool_count": 1024, 00:14:35.706 "small_bufsize": 8192, 00:14:35.706 "large_bufsize": 135168 00:14:35.706 } 00:14:35.706 } 
00:14:35.706 ] 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "subsystem": "sock", 00:14:35.706 "config": [ 00:14:35.706 { 00:14:35.706 "method": "sock_impl_set_options", 00:14:35.706 "params": { 00:14:35.706 "impl_name": "posix", 00:14:35.706 "recv_buf_size": 2097152, 00:14:35.706 "send_buf_size": 2097152, 00:14:35.706 "enable_recv_pipe": true, 00:14:35.706 "enable_quickack": false, 00:14:35.706 "enable_placement_id": 0, 00:14:35.706 "enable_zerocopy_send_server": true, 00:14:35.706 "enable_zerocopy_send_client": false, 00:14:35.706 "zerocopy_threshold": 0, 00:14:35.706 "tls_version": 0, 00:14:35.706 "enable_ktls": false 00:14:35.706 } 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "method": "sock_impl_set_options", 00:14:35.706 "params": { 00:14:35.706 "impl_name": "ssl", 00:14:35.706 "recv_buf_size": 4096, 00:14:35.706 "send_buf_size": 4096, 00:14:35.706 "enable_recv_pipe": true, 00:14:35.706 "enable_quickack": false, 00:14:35.706 "enable_placement_id": 0, 00:14:35.706 "enable_zerocopy_send_server": true, 00:14:35.706 "enable_zerocopy_send_client": false, 00:14:35.706 "zerocopy_threshold": 0, 00:14:35.706 "tls_version": 0, 00:14:35.706 "enable_ktls": false 00:14:35.706 } 00:14:35.706 } 00:14:35.706 ] 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "subsystem": "vmd", 00:14:35.706 "config": [] 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "subsystem": "accel", 00:14:35.706 "config": [ 00:14:35.706 { 00:14:35.706 "method": "accel_set_options", 00:14:35.706 "params": { 00:14:35.706 "small_cache_size": 128, 00:14:35.706 "large_cache_size": 16, 00:14:35.706 "task_count": 2048, 00:14:35.706 "sequence_count": 2048, 00:14:35.706 "buf_count": 2048 00:14:35.706 } 00:14:35.706 } 00:14:35.706 ] 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "subsystem": "bdev", 00:14:35.706 "config": [ 00:14:35.706 { 00:14:35.706 "method": "bdev_set_options", 00:14:35.706 "params": { 00:14:35.706 "bdev_io_pool_size": 65535, 00:14:35.706 "bdev_io_cache_size": 256, 00:14:35.706 "bdev_auto_examine": true, 
00:14:35.706 "iobuf_small_cache_size": 128, 00:14:35.706 "iobuf_large_cache_size": 16 00:14:35.706 } 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "method": "bdev_raid_set_options", 00:14:35.706 "params": { 00:14:35.706 "process_window_size_kb": 1024 00:14:35.706 } 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "method": "bdev_iscsi_set_options", 00:14:35.706 "params": { 00:14:35.706 "timeout_sec": 30 00:14:35.706 } 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "method": "bdev_nvme_set_options", 00:14:35.706 "params": { 00:14:35.706 "action_on_timeout": "none", 00:14:35.706 "timeout_us": 0, 00:14:35.706 "timeout_admin_us": 0, 00:14:35.706 "keep_alive_timeout_ms": 10000, 00:14:35.706 "arbitration_burst": 0, 00:14:35.706 "low_priority_weight": 0, 00:14:35.706 "medium_priority_weight": 0, 00:14:35.706 "high_priority_weight": 0, 00:14:35.706 "nvme_adminq_poll_period_us": 10000, 00:14:35.706 "nvme_ioq_poll_period_us": 0, 00:14:35.706 "io_queue_requests": 0, 00:14:35.706 "delay_cmd_submit": true, 00:14:35.706 "transport_retry_count": 4, 00:14:35.706 "bdev_retry_count": 3, 00:14:35.706 "transport_ack_timeout": 0, 00:14:35.706 "ctrlr_loss_timeout_sec": 0, 00:14:35.706 "reconnect_delay_sec": 0, 00:14:35.706 "fast_io_fail_timeout_sec": 0, 00:14:35.706 "disable_auto_failback": false, 00:14:35.706 "generate_uuids": false, 00:14:35.706 "transport_tos": 0, 00:14:35.706 "nvme_error_stat": false, 00:14:35.706 "rdma_srq_size": 0, 00:14:35.706 "io_path_stat": false, 00:14:35.706 "allow_accel_sequence": false, 00:14:35.706 "rdma_max_cq_size": 0, 00:14:35.706 "rdma_cm_event_timeout_ms": 0, 00:14:35.706 "dhchap_digests": [ 00:14:35.706 "sha256", 00:14:35.706 "sha384", 00:14:35.706 "sha512" 00:14:35.706 ], 00:14:35.706 "dhchap_dhgroups": [ 00:14:35.706 "null", 00:14:35.706 "ffdhe2048", 00:14:35.706 "ffdhe3072", 00:14:35.706 "ffdhe4096", 00:14:35.706 "ffdhe6144", 00:14:35.706 "ffdhe8192" 00:14:35.706 ] 00:14:35.706 } 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "method": "bdev_nvme_set_hotplug", 
00:14:35.706 "params": { 00:14:35.706 "period_us": 100000, 00:14:35.707 "enable": false 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "bdev_malloc_create", 00:14:35.707 "params": { 00:14:35.707 "name": "malloc0", 00:14:35.707 "num_blocks": 8192, 00:14:35.707 "block_size": 4096, 00:14:35.707 "physical_block_size": 4096, 00:14:35.707 "uuid": "a174e782-b01a-4896-904d-55e0900e8c58", 00:14:35.707 "optimal_io_boundary": 0 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "bdev_wait_for_examine" 00:14:35.707 } 00:14:35.707 ] 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "subsystem": "nbd", 00:14:35.707 "config": [] 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "subsystem": "scheduler", 00:14:35.707 "config": [ 00:14:35.707 { 00:14:35.707 "method": "framework_set_scheduler", 00:14:35.707 "params": { 00:14:35.707 "name": "static" 00:14:35.707 } 00:14:35.707 } 00:14:35.707 ] 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "subsystem": "nvmf", 00:14:35.707 "config": [ 00:14:35.707 { 00:14:35.707 "method": "nvmf_set_config", 00:14:35.707 "params": { 00:14:35.707 "discovery_filter": "match_any", 00:14:35.707 "admin_cmd_passthru": { 00:14:35.707 "identify_ctrlr": false 00:14:35.707 } 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "nvmf_set_max_subsystems", 00:14:35.707 "params": { 00:14:35.707 "max_subsystems": 1024 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "nvmf_set_crdt", 00:14:35.707 "params": { 00:14:35.707 "crdt1": 0, 00:14:35.707 "crdt2": 0, 00:14:35.707 "crdt3": 0 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "nvmf_create_transport", 00:14:35.707 "params": { 00:14:35.707 "trtype": "TCP", 00:14:35.707 "max_queue_depth": 128, 00:14:35.707 "max_io_qpairs_per_ctrlr": 127, 00:14:35.707 "in_capsule_data_size": 4096, 00:14:35.707 "max_io_size": 131072, 00:14:35.707 "io_unit_size": 131072, 00:14:35.707 "max_aq_depth": 128, 00:14:35.707 "num_shared_buffers": 511, 00:14:35.707 
"buf_cache_size": 4294967295, 00:14:35.707 "dif_insert_or_strip": false, 00:14:35.707 "zcopy": false, 00:14:35.707 "c2h_success": false, 00:14:35.707 "sock_priority": 0, 00:14:35.707 "abort_timeout_sec": 1, 00:14:35.707 "ack_timeout": 0 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "nvmf_create_subsystem", 00:14:35.707 "params": { 00:14:35.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.707 "allow_any_host": false, 00:14:35.707 "serial_number": "SPDK00000000000001", 00:14:35.707 "model_number": "SPDK bdev Controller", 00:14:35.707 "max_namespaces": 10, 00:14:35.707 "min_cntlid": 1, 00:14:35.707 "max_cntlid": 65519, 00:14:35.707 "ana_reporting": false 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "nvmf_subsystem_add_host", 00:14:35.707 "params": { 00:14:35.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.707 "host": "nqn.2016-06.io.spdk:host1", 00:14:35.707 "psk": "/tmp/tmp.rWc4tKrkDB" 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "nvmf_subsystem_add_ns", 00:14:35.707 "params": { 00:14:35.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.707 "namespace": { 00:14:35.707 "nsid": 1, 00:14:35.707 "bdev_name": "malloc0", 00:14:35.707 "nguid": "A174E782B01A4896904D55E0900E8C58", 00:14:35.707 "uuid": "a174e782-b01a-4896-904d-55e0900e8c58", 00:14:35.707 "no_auto_visible": false 00:14:35.707 } 00:14:35.707 } 00:14:35.707 }, 00:14:35.707 { 00:14:35.707 "method": "nvmf_subsystem_add_listener", 00:14:35.707 "params": { 00:14:35.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.707 "listen_address": { 00:14:35.707 "trtype": "TCP", 00:14:35.707 "adrfam": "IPv4", 00:14:35.707 "traddr": "10.0.0.2", 00:14:35.707 "trsvcid": "4420" 00:14:35.707 }, 00:14:35.707 "secure_channel": true 00:14:35.707 } 00:14:35.707 } 00:14:35.707 ] 00:14:35.707 } 00:14:35.707 ] 00:14:35.707 }' 00:14:35.707 17:01:51 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
save_config 00:14:35.966 17:01:51 -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:35.966 "subsystems": [ 00:14:35.966 { 00:14:35.966 "subsystem": "keyring", 00:14:35.966 "config": [] 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "subsystem": "iobuf", 00:14:35.966 "config": [ 00:14:35.966 { 00:14:35.966 "method": "iobuf_set_options", 00:14:35.966 "params": { 00:14:35.966 "small_pool_count": 8192, 00:14:35.966 "large_pool_count": 1024, 00:14:35.966 "small_bufsize": 8192, 00:14:35.966 "large_bufsize": 135168 00:14:35.966 } 00:14:35.966 } 00:14:35.966 ] 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "subsystem": "sock", 00:14:35.966 "config": [ 00:14:35.966 { 00:14:35.966 "method": "sock_impl_set_options", 00:14:35.966 "params": { 00:14:35.966 "impl_name": "posix", 00:14:35.966 "recv_buf_size": 2097152, 00:14:35.966 "send_buf_size": 2097152, 00:14:35.966 "enable_recv_pipe": true, 00:14:35.966 "enable_quickack": false, 00:14:35.966 "enable_placement_id": 0, 00:14:35.966 "enable_zerocopy_send_server": true, 00:14:35.966 "enable_zerocopy_send_client": false, 00:14:35.966 "zerocopy_threshold": 0, 00:14:35.966 "tls_version": 0, 00:14:35.966 "enable_ktls": false 00:14:35.966 } 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "method": "sock_impl_set_options", 00:14:35.966 "params": { 00:14:35.966 "impl_name": "ssl", 00:14:35.966 "recv_buf_size": 4096, 00:14:35.966 "send_buf_size": 4096, 00:14:35.966 "enable_recv_pipe": true, 00:14:35.966 "enable_quickack": false, 00:14:35.966 "enable_placement_id": 0, 00:14:35.966 "enable_zerocopy_send_server": true, 00:14:35.966 "enable_zerocopy_send_client": false, 00:14:35.966 "zerocopy_threshold": 0, 00:14:35.966 "tls_version": 0, 00:14:35.966 "enable_ktls": false 00:14:35.966 } 00:14:35.966 } 00:14:35.966 ] 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "subsystem": "vmd", 00:14:35.966 "config": [] 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "subsystem": "accel", 00:14:35.966 "config": [ 00:14:35.966 { 00:14:35.966 "method": "accel_set_options", 
00:14:35.966 "params": { 00:14:35.966 "small_cache_size": 128, 00:14:35.966 "large_cache_size": 16, 00:14:35.966 "task_count": 2048, 00:14:35.966 "sequence_count": 2048, 00:14:35.966 "buf_count": 2048 00:14:35.966 } 00:14:35.966 } 00:14:35.966 ] 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "subsystem": "bdev", 00:14:35.966 "config": [ 00:14:35.966 { 00:14:35.966 "method": "bdev_set_options", 00:14:35.966 "params": { 00:14:35.966 "bdev_io_pool_size": 65535, 00:14:35.966 "bdev_io_cache_size": 256, 00:14:35.966 "bdev_auto_examine": true, 00:14:35.966 "iobuf_small_cache_size": 128, 00:14:35.966 "iobuf_large_cache_size": 16 00:14:35.966 } 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "method": "bdev_raid_set_options", 00:14:35.966 "params": { 00:14:35.966 "process_window_size_kb": 1024 00:14:35.966 } 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "method": "bdev_iscsi_set_options", 00:14:35.966 "params": { 00:14:35.966 "timeout_sec": 30 00:14:35.966 } 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "method": "bdev_nvme_set_options", 00:14:35.966 "params": { 00:14:35.966 "action_on_timeout": "none", 00:14:35.966 "timeout_us": 0, 00:14:35.966 "timeout_admin_us": 0, 00:14:35.966 "keep_alive_timeout_ms": 10000, 00:14:35.966 "arbitration_burst": 0, 00:14:35.966 "low_priority_weight": 0, 00:14:35.966 "medium_priority_weight": 0, 00:14:35.966 "high_priority_weight": 0, 00:14:35.966 "nvme_adminq_poll_period_us": 10000, 00:14:35.966 "nvme_ioq_poll_period_us": 0, 00:14:35.966 "io_queue_requests": 512, 00:14:35.966 "delay_cmd_submit": true, 00:14:35.966 "transport_retry_count": 4, 00:14:35.966 "bdev_retry_count": 3, 00:14:35.966 "transport_ack_timeout": 0, 00:14:35.966 "ctrlr_loss_timeout_sec": 0, 00:14:35.966 "reconnect_delay_sec": 0, 00:14:35.966 "fast_io_fail_timeout_sec": 0, 00:14:35.966 "disable_auto_failback": false, 00:14:35.967 "generate_uuids": false, 00:14:35.967 "transport_tos": 0, 00:14:35.967 "nvme_error_stat": false, 00:14:35.967 "rdma_srq_size": 0, 00:14:35.967 
"io_path_stat": false, 00:14:35.967 "allow_accel_sequence": false, 00:14:35.967 "rdma_max_cq_size": 0, 00:14:35.967 "rdma_cm_event_timeout_ms": 0, 00:14:35.967 "dhchap_digests": [ 00:14:35.967 "sha256", 00:14:35.967 "sha384", 00:14:35.967 "sha512" 00:14:35.967 ], 00:14:35.967 "dhchap_dhgroups": [ 00:14:35.967 "null", 00:14:35.967 "ffdhe2048", 00:14:35.967 "ffdhe3072", 00:14:35.967 "ffdhe4096", 00:14:35.967 "ffdhe6144", 00:14:35.967 "ffdhe8192" 00:14:35.967 ] 00:14:35.967 } 00:14:35.967 }, 00:14:35.967 { 00:14:35.967 "method": "bdev_nvme_attach_controller", 00:14:35.967 "params": { 00:14:35.967 "name": "TLSTEST", 00:14:35.967 "trtype": "TCP", 00:14:35.967 "adrfam": "IPv4", 00:14:35.967 "traddr": "10.0.0.2", 00:14:35.967 "trsvcid": "4420", 00:14:35.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.967 "prchk_reftag": false, 00:14:35.967 "prchk_guard": false, 00:14:35.967 "ctrlr_loss_timeout_sec": 0, 00:14:35.967 "reconnect_delay_sec": 0, 00:14:35.967 "fast_io_fail_timeout_sec": 0, 00:14:35.967 "psk": "/tmp/tmp.rWc4tKrkDB", 00:14:35.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.967 "hdgst": false, 00:14:35.967 "ddgst": false 00:14:35.967 } 00:14:35.967 }, 00:14:35.967 { 00:14:35.967 "method": "bdev_nvme_set_hotplug", 00:14:35.967 "params": { 00:14:35.967 "period_us": 100000, 00:14:35.967 "enable": false 00:14:35.967 } 00:14:35.967 }, 00:14:35.967 { 00:14:35.967 "method": "bdev_wait_for_examine" 00:14:35.967 } 00:14:35.967 ] 00:14:35.967 }, 00:14:35.967 { 00:14:35.967 "subsystem": "nbd", 00:14:35.967 "config": [] 00:14:35.967 } 00:14:35.967 ] 00:14:35.967 }' 00:14:35.967 17:01:51 -- target/tls.sh@199 -- # killprocess 1691006 00:14:35.967 17:01:51 -- common/autotest_common.sh@936 -- # '[' -z 1691006 ']' 00:14:35.967 17:01:51 -- common/autotest_common.sh@940 -- # kill -0 1691006 00:14:35.967 17:01:51 -- common/autotest_common.sh@941 -- # uname 00:14:35.967 17:01:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:35.967 17:01:51 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1691006 00:14:35.967 17:01:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:35.967 17:01:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:35.967 17:01:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1691006' 00:14:35.967 killing process with pid 1691006 00:14:35.967 17:01:51 -- common/autotest_common.sh@955 -- # kill 1691006 00:14:35.967 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.967 00:14:35.967 Latency(us) 00:14:35.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.967 =================================================================================================================== 00:14:35.967 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:35.967 [2024-04-18 17:01:51.446074] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:35.967 17:01:51 -- common/autotest_common.sh@960 -- # wait 1691006 00:14:36.226 17:01:51 -- target/tls.sh@200 -- # killprocess 1690712 00:14:36.226 17:01:51 -- common/autotest_common.sh@936 -- # '[' -z 1690712 ']' 00:14:36.226 17:01:51 -- common/autotest_common.sh@940 -- # kill -0 1690712 00:14:36.226 17:01:51 -- common/autotest_common.sh@941 -- # uname 00:14:36.226 17:01:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:36.226 17:01:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1690712 00:14:36.226 17:01:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:36.226 17:01:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:36.226 17:01:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1690712' 00:14:36.226 killing process with pid 1690712 00:14:36.226 17:01:51 -- common/autotest_common.sh@955 -- # kill 1690712 00:14:36.226 [2024-04-18 
17:01:51.736752] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:36.226 17:01:51 -- common/autotest_common.sh@960 -- # wait 1690712 00:14:36.485 17:01:52 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:36.485 17:01:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:36.485 17:01:52 -- target/tls.sh@203 -- # echo '{ 00:14:36.485 "subsystems": [ 00:14:36.485 { 00:14:36.485 "subsystem": "keyring", 00:14:36.485 "config": [] 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "subsystem": "iobuf", 00:14:36.485 "config": [ 00:14:36.485 { 00:14:36.485 "method": "iobuf_set_options", 00:14:36.485 "params": { 00:14:36.485 "small_pool_count": 8192, 00:14:36.485 "large_pool_count": 1024, 00:14:36.485 "small_bufsize": 8192, 00:14:36.485 "large_bufsize": 135168 00:14:36.485 } 00:14:36.485 } 00:14:36.485 ] 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "subsystem": "sock", 00:14:36.485 "config": [ 00:14:36.485 { 00:14:36.485 "method": "sock_impl_set_options", 00:14:36.485 "params": { 00:14:36.485 "impl_name": "posix", 00:14:36.485 "recv_buf_size": 2097152, 00:14:36.485 "send_buf_size": 2097152, 00:14:36.485 "enable_recv_pipe": true, 00:14:36.485 "enable_quickack": false, 00:14:36.485 "enable_placement_id": 0, 00:14:36.485 "enable_zerocopy_send_server": true, 00:14:36.485 "enable_zerocopy_send_client": false, 00:14:36.485 "zerocopy_threshold": 0, 00:14:36.485 "tls_version": 0, 00:14:36.485 "enable_ktls": false 00:14:36.485 } 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "method": "sock_impl_set_options", 00:14:36.485 "params": { 00:14:36.485 "impl_name": "ssl", 00:14:36.485 "recv_buf_size": 4096, 00:14:36.485 "send_buf_size": 4096, 00:14:36.485 "enable_recv_pipe": true, 00:14:36.485 "enable_quickack": false, 00:14:36.485 "enable_placement_id": 0, 00:14:36.485 "enable_zerocopy_send_server": true, 00:14:36.485 "enable_zerocopy_send_client": false, 00:14:36.485 
"zerocopy_threshold": 0, 00:14:36.485 "tls_version": 0, 00:14:36.485 "enable_ktls": false 00:14:36.485 } 00:14:36.485 } 00:14:36.485 ] 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "subsystem": "vmd", 00:14:36.485 "config": [] 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "subsystem": "accel", 00:14:36.485 "config": [ 00:14:36.485 { 00:14:36.485 "method": "accel_set_options", 00:14:36.485 "params": { 00:14:36.485 "small_cache_size": 128, 00:14:36.485 "large_cache_size": 16, 00:14:36.485 "task_count": 2048, 00:14:36.485 "sequence_count": 2048, 00:14:36.485 "buf_count": 2048 00:14:36.485 } 00:14:36.485 } 00:14:36.485 ] 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "subsystem": "bdev", 00:14:36.485 "config": [ 00:14:36.485 { 00:14:36.485 "method": "bdev_set_options", 00:14:36.485 "params": { 00:14:36.485 "bdev_io_pool_size": 65535, 00:14:36.485 "bdev_io_cache_size": 256, 00:14:36.485 "bdev_auto_examine": true, 00:14:36.485 "iobuf_small_cache_size": 128, 00:14:36.485 "iobuf_large_cache_size": 16 00:14:36.485 } 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "method": "bdev_raid_set_options", 00:14:36.485 "params": { 00:14:36.485 "process_window_size_kb": 1024 00:14:36.485 } 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "method": "bdev_iscsi_set_options", 00:14:36.485 "params": { 00:14:36.485 "timeout_sec": 30 00:14:36.485 } 00:14:36.485 }, 00:14:36.485 { 00:14:36.485 "method": "bdev_nvme_set_options", 00:14:36.485 "params": { 00:14:36.485 "action_on_timeout": "none", 00:14:36.485 "timeout_us": 0, 00:14:36.485 "timeout_admin_us": 0, 00:14:36.485 "keep_alive_timeout_ms": 10000, 00:14:36.485 "arbitration_burst": 0, 00:14:36.485 "low_priority_weight": 0, 00:14:36.485 "medium_priority_weight": 0, 00:14:36.485 "high_priority_weight": 0, 00:14:36.486 "nvme_adminq_poll_period_us": 10000, 00:14:36.486 "nvme_ioq_poll_period_us": 0, 00:14:36.486 "io_queue_requests": 0, 00:14:36.486 "delay_cmd_submit": true, 00:14:36.486 "transport_retry_count": 4, 00:14:36.486 "bdev_retry_count": 3, 
00:14:36.486 "transport_ack_timeout": 0, 00:14:36.486 "ctrlr_loss_timeout_sec": 0, 00:14:36.486 "reconnect_delay_sec": 0, 00:14:36.486 "fast_io_fail_timeout_sec": 0, 00:14:36.486 "disable_auto_failback": false, 00:14:36.486 "generate_uuids": false, 00:14:36.486 "transport_tos": 0, 00:14:36.486 "nvme_error_stat": false, 00:14:36.486 "rdma_srq_size": 0, 00:14:36.486 "io_path_stat": false, 00:14:36.486 "allow_accel_sequence": false, 00:14:36.486 "rdma_max_cq_size": 0, 00:14:36.486 "rdma_cm_event_timeout_ms": 0, 00:14:36.486 "dhchap_digests": [ 00:14:36.486 "sha256", 00:14:36.486 "sha384", 00:14:36.486 "sha512" 00:14:36.486 ], 00:14:36.486 "dhchap_dhgroups": [ 00:14:36.486 "null", 00:14:36.486 "ffdhe2048", 00:14:36.486 "ffdhe3072", 00:14:36.486 "ffdhe4096", 00:14:36.486 "ffdhe6144", 00:14:36.486 "ffdhe8192" 00:14:36.486 ] 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "bdev_nvme_set_hotplug", 00:14:36.486 "params": { 00:14:36.486 17:01:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:36.486 "period_us": 100000, 00:14:36.486 "enable": false 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "bdev_malloc_create", 00:14:36.486 "params": { 00:14:36.486 "name": "malloc0", 00:14:36.486 "num_blocks": 8192, 00:14:36.486 "block_size": 4096, 00:14:36.486 "physical_block_size": 4096, 00:14:36.486 "uuid": "a174e782-b01a-4896-904d-55e0900e8c58", 00:14:36.486 "optimal_io_boundary": 0 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "bdev_wait_for_examine" 00:14:36.486 } 00:14:36.486 ] 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "subsystem": "nbd", 00:14:36.486 "config": [] 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "subsystem": "scheduler", 00:14:36.486 "config": [ 00:14:36.486 { 00:14:36.486 "method": "framework_set_scheduler", 00:14:36.486 "params": { 00:14:36.486 "name": "static" 00:14:36.486 } 00:14:36.486 } 00:14:36.486 ] 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "subsystem": "nvmf", 00:14:36.486 
"config": [ 00:14:36.486 { 00:14:36.486 "method": "nvmf_set_config", 00:14:36.486 "params": { 00:14:36.486 "discovery_filter": "match_any", 00:14:36.486 "admin_cmd_passthru": { 00:14:36.486 "identify_ctrlr": false 00:14:36.486 } 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "nvmf_set_max_subsystems", 00:14:36.486 "params": { 00:14:36.486 "max_subsystems": 1024 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "nvmf_set_crdt", 00:14:36.486 "params": { 00:14:36.486 "crdt1": 0, 00:14:36.486 "crdt2": 0, 00:14:36.486 "crdt3": 0 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "nvmf_create_transport", 00:14:36.486 "params": { 00:14:36.486 "trtype": "TCP", 00:14:36.486 "max_queue_depth": 128, 00:14:36.486 "max_io_qpairs_per_ctrlr": 127, 00:14:36.486 "in_capsule_data_size": 4096, 00:14:36.486 "max_io_size": 131072, 00:14:36.486 "io_unit_size": 131072, 00:14:36.486 "max_aq_depth": 128, 00:14:36.486 "num_shared_buffers": 511, 00:14:36.486 "buf_cache_size": 4294967295, 00:14:36.486 "dif_insert_or_strip": false, 00:14:36.486 "zcopy": false, 00:14:36.486 "c2h_success": false, 00:14:36.486 "sock_priority": 0, 00:14:36.486 "abort_timeout_sec": 1, 00:14:36.486 "ack_timeout": 0 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "nvmf_create_subsystem", 00:14:36.486 "params": { 00:14:36.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.486 "allow_any_host": false, 00:14:36.486 "serial_number": "SPDK00000000000001", 00:14:36.486 "model_number": "SPDK bdev Controller", 00:14:36.486 "max_namespaces": 10, 00:14:36.486 "min_cntlid": 1, 00:14:36.486 "max_cntlid": 65519, 00:14:36.486 "ana_reporting": false 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "nvmf_subsystem_add_host", 00:14:36.486 "params": { 00:14:36.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.486 "host": "nqn.2016-06.io.spdk:host1", 00:14:36.486 "psk": "/tmp/tmp.rWc4tKrkDB" 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 
00:14:36.486 "method": "nvmf_subsystem_add_ns", 00:14:36.486 "params": { 00:14:36.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.486 "namespace": { 00:14:36.486 "nsid": 1, 00:14:36.486 "bdev_name": "malloc0", 00:14:36.486 "nguid": "A174E782B01A4896904D55E0900E8C58", 00:14:36.486 "uuid": "a174e782-b01a-4896-904d-55e0900e8c58", 00:14:36.486 "no_auto_visible": false 00:14:36.486 } 00:14:36.486 } 00:14:36.486 }, 00:14:36.486 { 00:14:36.486 "method": "nvmf_subsystem_add_listener", 00:14:36.486 "params": { 00:14:36.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.486 "listen_address": { 00:14:36.486 "trtype": "TCP", 00:14:36.486 "adrfam": "IPv4", 00:14:36.486 "traddr": "10.0.0.2", 00:14:36.486 "trsvcid": "4420" 00:14:36.486 }, 00:14:36.486 "secure_channel": true 00:14:36.486 } 00:14:36.486 } 00:14:36.486 ] 00:14:36.486 } 00:14:36.486 ] 00:14:36.486 }' 00:14:36.486 17:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.486 17:01:52 -- nvmf/common.sh@470 -- # nvmfpid=1691280 00:14:36.486 17:01:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:36.486 17:01:52 -- nvmf/common.sh@471 -- # waitforlisten 1691280 00:14:36.486 17:01:52 -- common/autotest_common.sh@817 -- # '[' -z 1691280 ']' 00:14:36.486 17:01:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.486 17:01:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.486 17:01:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:36.486 17:01:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.486 17:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.486 [2024-04-18 17:01:52.093425] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:36.486 [2024-04-18 17:01:52.093503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.486 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.486 [2024-04-18 17:01:52.157537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.746 [2024-04-18 17:01:52.264622] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.746 [2024-04-18 17:01:52.264703] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.746 [2024-04-18 17:01:52.264717] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.746 [2024-04-18 17:01:52.264728] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.746 [2024-04-18 17:01:52.264738] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:36.746 [2024-04-18 17:01:52.264850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.004 [2024-04-18 17:01:52.497208] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.004 [2024-04-18 17:01:52.513176] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:37.004 [2024-04-18 17:01:52.529234] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:37.004 [2024-04-18 17:01:52.537558] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.572 17:01:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:37.572 17:01:53 -- common/autotest_common.sh@850 -- # return 0 00:14:37.572 17:01:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:37.572 17:01:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:37.572 17:01:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.572 17:01:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.572 17:01:53 -- target/tls.sh@207 -- # bdevperf_pid=1691432 00:14:37.572 17:01:53 -- target/tls.sh@208 -- # waitforlisten 1691432 /var/tmp/bdevperf.sock 00:14:37.572 17:01:53 -- common/autotest_common.sh@817 -- # '[' -z 1691432 ']' 00:14:37.572 17:01:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.572 17:01:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:37.572 17:01:53 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:37.572 17:01:53 -- target/tls.sh@204 -- # echo '{ 00:14:37.572 "subsystems": [ 00:14:37.572 { 00:14:37.572 "subsystem": "keyring", 00:14:37.572 "config": [] 00:14:37.572 }, 00:14:37.572 { 00:14:37.572 "subsystem": "iobuf", 00:14:37.572 "config": [ 
00:14:37.572 { 00:14:37.572 "method": "iobuf_set_options", 00:14:37.572 "params": { 00:14:37.572 "small_pool_count": 8192, 00:14:37.572 "large_pool_count": 1024, 00:14:37.572 "small_bufsize": 8192, 00:14:37.572 "large_bufsize": 135168 00:14:37.572 } 00:14:37.572 } 00:14:37.572 ] 00:14:37.572 }, 00:14:37.572 { 00:14:37.572 "subsystem": "sock", 00:14:37.572 "config": [ 00:14:37.572 { 00:14:37.572 "method": "sock_impl_set_options", 00:14:37.572 "params": { 00:14:37.572 "impl_name": "posix", 00:14:37.572 "recv_buf_size": 2097152, 00:14:37.572 "send_buf_size": 2097152, 00:14:37.572 "enable_recv_pipe": true, 00:14:37.572 "enable_quickack": false, 00:14:37.572 "enable_placement_id": 0, 00:14:37.572 "enable_zerocopy_send_server": true, 00:14:37.572 "enable_zerocopy_send_client": false, 00:14:37.572 "zerocopy_threshold": 0, 00:14:37.572 "tls_version": 0, 00:14:37.572 "enable_ktls": false 00:14:37.572 } 00:14:37.572 }, 00:14:37.572 { 00:14:37.572 "method": "sock_impl_set_options", 00:14:37.572 "params": { 00:14:37.572 "impl_name": "ssl", 00:14:37.572 "recv_buf_size": 4096, 00:14:37.572 "send_buf_size": 4096, 00:14:37.572 "enable_recv_pipe": true, 00:14:37.572 "enable_quickack": false, 00:14:37.572 "enable_placement_id": 0, 00:14:37.572 "enable_zerocopy_send_server": true, 00:14:37.572 "enable_zerocopy_send_client": false, 00:14:37.572 "zerocopy_threshold": 0, 00:14:37.572 "tls_version": 0, 00:14:37.572 "enable_ktls": false 00:14:37.573 } 00:14:37.573 } 00:14:37.573 ] 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "subsystem": "vmd", 00:14:37.573 "config": [] 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "subsystem": "accel", 00:14:37.573 "config": [ 00:14:37.573 { 00:14:37.573 "method": "accel_set_options", 00:14:37.573 "params": { 00:14:37.573 "small_cache_size": 128, 00:14:37.573 "large_cache_size": 16, 00:14:37.573 "task_count": 2048, 00:14:37.573 "sequence_count": 2048, 00:14:37.573 "buf_count": 2048 00:14:37.573 } 00:14:37.573 } 00:14:37.573 ] 00:14:37.573 }, 00:14:37.573 { 
00:14:37.573 "subsystem": "bdev", 00:14:37.573 "config": [ 00:14:37.573 { 00:14:37.573 "method": "bdev_set_options", 00:14:37.573 "params": { 00:14:37.573 "bdev_io_pool_size": 65535, 00:14:37.573 "bdev_io_cache_size": 256, 00:14:37.573 "bdev_auto_examine": true, 00:14:37.573 "iobuf_small_cache_size": 128, 00:14:37.573 "iobuf_large_cache_size": 16 00:14:37.573 } 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "method": "bdev_raid_set_options", 00:14:37.573 "params": { 00:14:37.573 "process_window_size_kb": 1024 00:14:37.573 } 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "method": "bdev_iscsi_set_options", 00:14:37.573 "params": { 00:14:37.573 "timeout_sec": 30 00:14:37.573 } 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "method": "bdev_nvme_set_options", 00:14:37.573 "params": { 00:14:37.573 "action_on_timeout": "none", 00:14:37.573 "timeout_us": 0, 00:14:37.573 "timeout_admin_us": 0, 00:14:37.573 "keep_alive_timeout_ms": 10000, 00:14:37.573 "arbitration_burst": 0, 00:14:37.573 "low_priority_weight": 0, 00:14:37.573 "medium_priority_weight": 0, 00:14:37.573 "high_priority_weight": 0, 00:14:37.573 "nvme_adminq_poll_period_us": 10000, 00:14:37.573 "nvme_ioq_poll_period_us": 0, 00:14:37.573 "io_queue_requests": 512, 00:14:37.573 "delay_cmd_submit": true, 00:14:37.573 "transport_retry_count": 4, 00:14:37.573 "bdev_retry_count": 3, 00:14:37.573 "transport_ack_timeout": 0, 00:14:37.573 "ctrlr_loss_timeout_sec": 0, 00:14:37.573 "reconnect_delay_sec": 0, 00:14:37.573 "fast_io_fail_timeout_sec": 0, 00:14:37.573 "disable_auto_failback": false, 00:14:37.573 "generate_uuids": false, 00:14:37.573 "transport_tos": 0, 00:14:37.573 "nvme_error_stat": false, 00:14:37.573 "rdma_srq_size": 0, 00:14:37.573 "io_path_stat": false, 00:14:37.573 "allow_accel_sequence": false, 00:14:37.573 "rdma_max_cq_size": 0, 00:14:37.573 "rdma_cm_event_timeout_ms": 0, 00:14:37.573 "dhchap_digests": [ 00:14:37.573 "sha256", 00:14:37.573 "sha384", 00:14:37.573 "sha512" 00:14:37.573 ], 00:14:37.573 
"dhchap_dhgroups": [ 00:14:37.573 "null", 00:14:37.573 "ffdhe2048", 00:14:37.573 "ffdhe3072", 00:14:37.573 "ffdhe4096", 00:14:37.573 "ffdhe6144", 00:14:37.573 "ffdhe8192" 00:14:37.573 ] 00:14:37.573 } 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "method": "bdev_nvme_attach_controller", 00:14:37.573 "params": { 00:14:37.573 "name": "TLSTEST", 00:14:37.573 "trtype": "TCP", 00:14:37.573 "adrfam": "IPv4", 00:14:37.573 "traddr": "10.0.0.2", 00:14:37.573 "trsvcid": "4420", 00:14:37.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.573 "prchk_reftag": false, 00:14:37.573 "prchk_guard": false, 00:14:37.573 "ctrlr_loss_timeout_sec": 0, 00:14:37.573 "reconnect_delay_sec": 0, 00:14:37.573 "fast_io_fail_timeout_sec": 0, 00:14:37.573 "psk": "/tmp/tmp.rWc4tKrkDB", 00:14:37.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.573 "hdgst": false, 00:14:37.573 "ddgst": false 00:14:37.573 } 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "method": "bdev_nvme_set_hotplug", 00:14:37.573 "params": { 00:14:37.573 "period_us": 100000, 00:14:37.573 "enable": false 00:14:37.573 } 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "method": "bdev_wait_for_examine" 00:14:37.573 } 00:14:37.573 ] 00:14:37.573 }, 00:14:37.573 { 00:14:37.573 "subsystem": "nbd", 00:14:37.573 "config": [] 00:14:37.573 } 00:14:37.573 ] 00:14:37.573 }' 00:14:37.573 17:01:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.573 17:01:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:37.573 17:01:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.573 [2024-04-18 17:01:53.146300] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:37.573 [2024-04-18 17:01:53.146418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691432 ] 00:14:37.573 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.573 [2024-04-18 17:01:53.210873] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.833 [2024-04-18 17:01:53.319874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.833 [2024-04-18 17:01:53.481105] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.833 [2024-04-18 17:01:53.481243] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:38.772 17:01:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.772 17:01:54 -- common/autotest_common.sh@850 -- # return 0 00:14:38.772 17:01:54 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:38.772 Running I/O for 10 seconds... 
00:14:48.813 00:14:48.813 Latency(us) 00:14:48.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.813 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:48.813 Verification LBA range: start 0x0 length 0x2000 00:14:48.813 TLSTESTn1 : 10.05 2630.62 10.28 0.00 0.00 48524.04 6941.96 42719.76 00:14:48.813 =================================================================================================================== 00:14:48.813 Total : 2630.62 10.28 0.00 0.00 48524.04 6941.96 42719.76 00:14:48.813 0 00:14:48.813 17:02:04 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.813 17:02:04 -- target/tls.sh@214 -- # killprocess 1691432 00:14:48.813 17:02:04 -- common/autotest_common.sh@936 -- # '[' -z 1691432 ']' 00:14:48.813 17:02:04 -- common/autotest_common.sh@940 -- # kill -0 1691432 00:14:48.813 17:02:04 -- common/autotest_common.sh@941 -- # uname 00:14:48.813 17:02:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.813 17:02:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1691432 00:14:48.813 17:02:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:48.813 17:02:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:48.813 17:02:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1691432' 00:14:48.813 killing process with pid 1691432 00:14:48.813 17:02:04 -- common/autotest_common.sh@955 -- # kill 1691432 00:14:48.813 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.813 00:14:48.814 Latency(us) 00:14:48.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.814 =================================================================================================================== 00:14:48.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.814 [2024-04-18 17:02:04.348566] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:48.814 17:02:04 -- common/autotest_common.sh@960 -- # wait 1691432 00:14:49.072 17:02:04 -- target/tls.sh@215 -- # killprocess 1691280 00:14:49.072 17:02:04 -- common/autotest_common.sh@936 -- # '[' -z 1691280 ']' 00:14:49.072 17:02:04 -- common/autotest_common.sh@940 -- # kill -0 1691280 00:14:49.072 17:02:04 -- common/autotest_common.sh@941 -- # uname 00:14:49.072 17:02:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:49.072 17:02:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1691280 00:14:49.072 17:02:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:49.072 17:02:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:49.072 17:02:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1691280' 00:14:49.072 killing process with pid 1691280 00:14:49.072 17:02:04 -- common/autotest_common.sh@955 -- # kill 1691280 00:14:49.072 [2024-04-18 17:02:04.647542] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:49.072 17:02:04 -- common/autotest_common.sh@960 -- # wait 1691280 00:14:49.331 17:02:04 -- target/tls.sh@218 -- # nvmfappstart 00:14:49.331 17:02:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:49.331 17:02:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:49.331 17:02:04 -- common/autotest_common.sh@10 -- # set +x 00:14:49.331 17:02:04 -- nvmf/common.sh@470 -- # nvmfpid=1692774 00:14:49.331 17:02:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:49.331 17:02:04 -- nvmf/common.sh@471 -- # waitforlisten 1692774 00:14:49.331 17:02:04 -- common/autotest_common.sh@817 -- # '[' -z 1692774 ']' 00:14:49.331 17:02:04 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:14:49.331 17:02:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:49.331 17:02:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.331 17:02:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:49.331 17:02:04 -- common/autotest_common.sh@10 -- # set +x 00:14:49.331 [2024-04-18 17:02:04.991170] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:49.331 [2024-04-18 17:02:04.991258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.331 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.589 [2024-04-18 17:02:05.054205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.589 [2024-04-18 17:02:05.157013] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.589 [2024-04-18 17:02:05.157071] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.589 [2024-04-18 17:02:05.157095] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.589 [2024-04-18 17:02:05.157106] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.589 [2024-04-18 17:02:05.157116] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:49.589 [2024-04-18 17:02:05.157157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.589 17:02:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:49.589 17:02:05 -- common/autotest_common.sh@850 -- # return 0 00:14:49.589 17:02:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:49.589 17:02:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:49.589 17:02:05 -- common/autotest_common.sh@10 -- # set +x 00:14:49.589 17:02:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.589 17:02:05 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.rWc4tKrkDB 00:14:49.589 17:02:05 -- target/tls.sh@49 -- # local key=/tmp/tmp.rWc4tKrkDB 00:14:49.589 17:02:05 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:49.849 [2024-04-18 17:02:05.514972] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.849 17:02:05 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:50.109 17:02:05 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:50.368 [2024-04-18 17:02:05.996251] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:50.368 [2024-04-18 17:02:05.996542] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.368 17:02:06 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:50.627 malloc0 00:14:50.627 17:02:06 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:14:50.885 17:02:06 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rWc4tKrkDB 00:14:51.143 [2024-04-18 17:02:06.726399] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:51.143 17:02:06 -- target/tls.sh@222 -- # bdevperf_pid=1693056 00:14:51.143 17:02:06 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:51.143 17:02:06 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:51.143 17:02:06 -- target/tls.sh@225 -- # waitforlisten 1693056 /var/tmp/bdevperf.sock 00:14:51.143 17:02:06 -- common/autotest_common.sh@817 -- # '[' -z 1693056 ']' 00:14:51.143 17:02:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.143 17:02:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:51.143 17:02:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.143 17:02:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:51.143 17:02:06 -- common/autotest_common.sh@10 -- # set +x 00:14:51.143 [2024-04-18 17:02:06.781906] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:51.143 [2024-04-18 17:02:06.781992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693056 ] 00:14:51.143 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.143 [2024-04-18 17:02:06.842046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.401 [2024-04-18 17:02:06.955801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.338 17:02:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:52.338 17:02:07 -- common/autotest_common.sh@850 -- # return 0 00:14:52.338 17:02:07 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rWc4tKrkDB 00:14:52.338 17:02:07 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:52.597 [2024-04-18 17:02:08.212587] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.597 nvme0n1 00:14:52.857 17:02:08 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:52.857 Running I/O for 1 seconds... 
00:14:53.797 00:14:53.797 Latency(us) 00:14:53.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.798 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:53.798 Verification LBA range: start 0x0 length 0x2000 00:14:53.798 nvme0n1 : 1.03 3355.74 13.11 0.00 0.00 37671.20 7912.87 52817.16 00:14:53.798 =================================================================================================================== 00:14:53.798 Total : 3355.74 13.11 0.00 0.00 37671.20 7912.87 52817.16 00:14:53.798 0 00:14:53.798 17:02:09 -- target/tls.sh@234 -- # killprocess 1693056 00:14:53.798 17:02:09 -- common/autotest_common.sh@936 -- # '[' -z 1693056 ']' 00:14:53.798 17:02:09 -- common/autotest_common.sh@940 -- # kill -0 1693056 00:14:53.798 17:02:09 -- common/autotest_common.sh@941 -- # uname 00:14:53.798 17:02:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.798 17:02:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1693056 00:14:53.798 17:02:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:53.798 17:02:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:53.798 17:02:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1693056' 00:14:53.798 killing process with pid 1693056 00:14:53.798 17:02:09 -- common/autotest_common.sh@955 -- # kill 1693056 00:14:53.798 Received shutdown signal, test time was about 1.000000 seconds 00:14:53.798 00:14:53.798 Latency(us) 00:14:53.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.798 =================================================================================================================== 00:14:53.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.798 17:02:09 -- common/autotest_common.sh@960 -- # wait 1693056 00:14:54.056 17:02:09 -- target/tls.sh@235 -- # killprocess 1692774 00:14:54.056 17:02:09 -- common/autotest_common.sh@936 -- # 
'[' -z 1692774 ']' 00:14:54.056 17:02:09 -- common/autotest_common.sh@940 -- # kill -0 1692774 00:14:54.056 17:02:09 -- common/autotest_common.sh@941 -- # uname 00:14:54.056 17:02:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:54.056 17:02:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1692774 00:14:54.056 17:02:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:54.056 17:02:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:54.056 17:02:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1692774' 00:14:54.056 killing process with pid 1692774 00:14:54.056 17:02:09 -- common/autotest_common.sh@955 -- # kill 1692774 00:14:54.056 [2024-04-18 17:02:09.753966] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:54.056 17:02:09 -- common/autotest_common.sh@960 -- # wait 1692774 00:14:54.624 17:02:10 -- target/tls.sh@238 -- # nvmfappstart 00:14:54.624 17:02:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:54.624 17:02:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:54.624 17:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:54.624 17:02:10 -- nvmf/common.sh@470 -- # nvmfpid=1693466 00:14:54.624 17:02:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:54.624 17:02:10 -- nvmf/common.sh@471 -- # waitforlisten 1693466 00:14:54.624 17:02:10 -- common/autotest_common.sh@817 -- # '[' -z 1693466 ']' 00:14:54.624 17:02:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.624 17:02:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:54.624 17:02:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:54.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.624 17:02:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:54.624 17:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:54.624 [2024-04-18 17:02:10.087023] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:54.624 [2024-04-18 17:02:10.087102] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.624 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.624 [2024-04-18 17:02:10.152313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.624 [2024-04-18 17:02:10.261451] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.624 [2024-04-18 17:02:10.261518] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.624 [2024-04-18 17:02:10.261532] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.624 [2024-04-18 17:02:10.261543] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.624 [2024-04-18 17:02:10.261553] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:54.624 [2024-04-18 17:02:10.261580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.883 17:02:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:54.883 17:02:10 -- common/autotest_common.sh@850 -- # return 0 00:14:54.883 17:02:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:54.883 17:02:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:54.883 17:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:54.883 17:02:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.883 17:02:10 -- target/tls.sh@239 -- # rpc_cmd 00:14:54.883 17:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.883 17:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:54.883 [2024-04-18 17:02:10.414619] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.883 malloc0 00:14:54.883 [2024-04-18 17:02:10.447321] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:54.883 [2024-04-18 17:02:10.447614] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.883 17:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.883 17:02:10 -- target/tls.sh@252 -- # bdevperf_pid=1693485 00:14:54.883 17:02:10 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:54.883 17:02:10 -- target/tls.sh@254 -- # waitforlisten 1693485 /var/tmp/bdevperf.sock 00:14:54.883 17:02:10 -- common/autotest_common.sh@817 -- # '[' -z 1693485 ']' 00:14:54.883 17:02:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.883 17:02:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:54.883 17:02:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:14:54.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.883 17:02:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:54.883 17:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:54.883 [2024-04-18 17:02:10.516700] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:54.884 [2024-04-18 17:02:10.516775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693485 ] 00:14:54.884 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.884 [2024-04-18 17:02:10.577952] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.141 [2024-04-18 17:02:10.694853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.141 17:02:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:55.141 17:02:10 -- common/autotest_common.sh@850 -- # return 0 00:14:55.141 17:02:10 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rWc4tKrkDB 00:14:55.399 17:02:11 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:55.657 [2024-04-18 17:02:11.292024] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:55.916 nvme0n1 00:14:55.916 17:02:11 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:55.916 Running I/O for 1 seconds... 
00:14:56.855 00:14:56.855 Latency(us) 00:14:56.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.855 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.855 Verification LBA range: start 0x0 length 0x2000 00:14:56.855 nvme0n1 : 1.02 3439.22 13.43 0.00 0.00 36813.74 10097.40 52428.80 00:14:56.855 =================================================================================================================== 00:14:56.855 Total : 3439.22 13.43 0.00 0.00 36813.74 10097.40 52428.80 00:14:56.855 0 00:14:56.855 17:02:12 -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:56.855 17:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.855 17:02:12 -- common/autotest_common.sh@10 -- # set +x 00:14:57.114 17:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:57.114 17:02:12 -- target/tls.sh@263 -- # tgtcfg='{ 00:14:57.114 "subsystems": [ 00:14:57.114 { 00:14:57.114 "subsystem": "keyring", 00:14:57.114 "config": [ 00:14:57.114 { 00:14:57.114 "method": "keyring_file_add_key", 00:14:57.114 "params": { 00:14:57.114 "name": "key0", 00:14:57.114 "path": "/tmp/tmp.rWc4tKrkDB" 00:14:57.114 } 00:14:57.114 } 00:14:57.114 ] 00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "subsystem": "iobuf", 00:14:57.114 "config": [ 00:14:57.114 { 00:14:57.114 "method": "iobuf_set_options", 00:14:57.114 "params": { 00:14:57.114 "small_pool_count": 8192, 00:14:57.114 "large_pool_count": 1024, 00:14:57.114 "small_bufsize": 8192, 00:14:57.114 "large_bufsize": 135168 00:14:57.114 } 00:14:57.114 } 00:14:57.114 ] 00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "subsystem": "sock", 00:14:57.114 "config": [ 00:14:57.114 { 00:14:57.114 "method": "sock_impl_set_options", 00:14:57.114 "params": { 00:14:57.114 "impl_name": "posix", 00:14:57.114 "recv_buf_size": 2097152, 00:14:57.114 "send_buf_size": 2097152, 00:14:57.114 "enable_recv_pipe": true, 00:14:57.114 "enable_quickack": false, 00:14:57.114 "enable_placement_id": 0, 
00:14:57.114 "enable_zerocopy_send_server": true, 00:14:57.114 "enable_zerocopy_send_client": false, 00:14:57.114 "zerocopy_threshold": 0, 00:14:57.114 "tls_version": 0, 00:14:57.114 "enable_ktls": false 00:14:57.114 } 00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "method": "sock_impl_set_options", 00:14:57.114 "params": { 00:14:57.114 "impl_name": "ssl", 00:14:57.114 "recv_buf_size": 4096, 00:14:57.114 "send_buf_size": 4096, 00:14:57.114 "enable_recv_pipe": true, 00:14:57.114 "enable_quickack": false, 00:14:57.114 "enable_placement_id": 0, 00:14:57.114 "enable_zerocopy_send_server": true, 00:14:57.114 "enable_zerocopy_send_client": false, 00:14:57.114 "zerocopy_threshold": 0, 00:14:57.114 "tls_version": 0, 00:14:57.114 "enable_ktls": false 00:14:57.114 } 00:14:57.114 } 00:14:57.114 ] 00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "subsystem": "vmd", 00:14:57.114 "config": [] 00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "subsystem": "accel", 00:14:57.114 "config": [ 00:14:57.114 { 00:14:57.114 "method": "accel_set_options", 00:14:57.114 "params": { 00:14:57.114 "small_cache_size": 128, 00:14:57.114 "large_cache_size": 16, 00:14:57.114 "task_count": 2048, 00:14:57.114 "sequence_count": 2048, 00:14:57.114 "buf_count": 2048 00:14:57.114 } 00:14:57.114 } 00:14:57.114 ] 00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "subsystem": "bdev", 00:14:57.114 "config": [ 00:14:57.114 { 00:14:57.114 "method": "bdev_set_options", 00:14:57.114 "params": { 00:14:57.114 "bdev_io_pool_size": 65535, 00:14:57.114 "bdev_io_cache_size": 256, 00:14:57.114 "bdev_auto_examine": true, 00:14:57.114 "iobuf_small_cache_size": 128, 00:14:57.114 "iobuf_large_cache_size": 16 00:14:57.114 } 00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "method": "bdev_raid_set_options", 00:14:57.114 "params": { 00:14:57.114 "process_window_size_kb": 1024 00:14:57.114 } 00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "method": "bdev_iscsi_set_options", 00:14:57.114 "params": { 00:14:57.114 "timeout_sec": 30 00:14:57.114 } 
00:14:57.114 }, 00:14:57.114 { 00:14:57.114 "method": "bdev_nvme_set_options", 00:14:57.114 "params": { 00:14:57.114 "action_on_timeout": "none", 00:14:57.114 "timeout_us": 0, 00:14:57.114 "timeout_admin_us": 0, 00:14:57.114 "keep_alive_timeout_ms": 10000, 00:14:57.114 "arbitration_burst": 0, 00:14:57.114 "low_priority_weight": 0, 00:14:57.114 "medium_priority_weight": 0, 00:14:57.114 "high_priority_weight": 0, 00:14:57.114 "nvme_adminq_poll_period_us": 10000, 00:14:57.114 "nvme_ioq_poll_period_us": 0, 00:14:57.114 "io_queue_requests": 0, 00:14:57.115 "delay_cmd_submit": true, 00:14:57.115 "transport_retry_count": 4, 00:14:57.115 "bdev_retry_count": 3, 00:14:57.115 "transport_ack_timeout": 0, 00:14:57.115 "ctrlr_loss_timeout_sec": 0, 00:14:57.115 "reconnect_delay_sec": 0, 00:14:57.115 "fast_io_fail_timeout_sec": 0, 00:14:57.115 "disable_auto_failback": false, 00:14:57.115 "generate_uuids": false, 00:14:57.115 "transport_tos": 0, 00:14:57.115 "nvme_error_stat": false, 00:14:57.115 "rdma_srq_size": 0, 00:14:57.115 "io_path_stat": false, 00:14:57.115 "allow_accel_sequence": false, 00:14:57.115 "rdma_max_cq_size": 0, 00:14:57.115 "rdma_cm_event_timeout_ms": 0, 00:14:57.115 "dhchap_digests": [ 00:14:57.115 "sha256", 00:14:57.115 "sha384", 00:14:57.115 "sha512" 00:14:57.115 ], 00:14:57.115 "dhchap_dhgroups": [ 00:14:57.115 "null", 00:14:57.115 "ffdhe2048", 00:14:57.115 "ffdhe3072", 00:14:57.115 "ffdhe4096", 00:14:57.115 "ffdhe6144", 00:14:57.115 "ffdhe8192" 00:14:57.115 ] 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "bdev_nvme_set_hotplug", 00:14:57.115 "params": { 00:14:57.115 "period_us": 100000, 00:14:57.115 "enable": false 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "bdev_malloc_create", 00:14:57.115 "params": { 00:14:57.115 "name": "malloc0", 00:14:57.115 "num_blocks": 8192, 00:14:57.115 "block_size": 4096, 00:14:57.115 "physical_block_size": 4096, 00:14:57.115 "uuid": "bd74302b-d190-4f57-9d50-8136c1df90f7", 
00:14:57.115 "optimal_io_boundary": 0 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "bdev_wait_for_examine" 00:14:57.115 } 00:14:57.115 ] 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "subsystem": "nbd", 00:14:57.115 "config": [] 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "subsystem": "scheduler", 00:14:57.115 "config": [ 00:14:57.115 { 00:14:57.115 "method": "framework_set_scheduler", 00:14:57.115 "params": { 00:14:57.115 "name": "static" 00:14:57.115 } 00:14:57.115 } 00:14:57.115 ] 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "subsystem": "nvmf", 00:14:57.115 "config": [ 00:14:57.115 { 00:14:57.115 "method": "nvmf_set_config", 00:14:57.115 "params": { 00:14:57.115 "discovery_filter": "match_any", 00:14:57.115 "admin_cmd_passthru": { 00:14:57.115 "identify_ctrlr": false 00:14:57.115 } 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "nvmf_set_max_subsystems", 00:14:57.115 "params": { 00:14:57.115 "max_subsystems": 1024 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "nvmf_set_crdt", 00:14:57.115 "params": { 00:14:57.115 "crdt1": 0, 00:14:57.115 "crdt2": 0, 00:14:57.115 "crdt3": 0 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "nvmf_create_transport", 00:14:57.115 "params": { 00:14:57.115 "trtype": "TCP", 00:14:57.115 "max_queue_depth": 128, 00:14:57.115 "max_io_qpairs_per_ctrlr": 127, 00:14:57.115 "in_capsule_data_size": 4096, 00:14:57.115 "max_io_size": 131072, 00:14:57.115 "io_unit_size": 131072, 00:14:57.115 "max_aq_depth": 128, 00:14:57.115 "num_shared_buffers": 511, 00:14:57.115 "buf_cache_size": 4294967295, 00:14:57.115 "dif_insert_or_strip": false, 00:14:57.115 "zcopy": false, 00:14:57.115 "c2h_success": false, 00:14:57.115 "sock_priority": 0, 00:14:57.115 "abort_timeout_sec": 1, 00:14:57.115 "ack_timeout": 0 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "nvmf_create_subsystem", 00:14:57.115 "params": { 00:14:57.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:57.115 "allow_any_host": false, 00:14:57.115 "serial_number": "00000000000000000000", 00:14:57.115 "model_number": "SPDK bdev Controller", 00:14:57.115 "max_namespaces": 32, 00:14:57.115 "min_cntlid": 1, 00:14:57.115 "max_cntlid": 65519, 00:14:57.115 "ana_reporting": false 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "nvmf_subsystem_add_host", 00:14:57.115 "params": { 00:14:57.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.115 "host": "nqn.2016-06.io.spdk:host1", 00:14:57.115 "psk": "key0" 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "nvmf_subsystem_add_ns", 00:14:57.115 "params": { 00:14:57.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.115 "namespace": { 00:14:57.115 "nsid": 1, 00:14:57.115 "bdev_name": "malloc0", 00:14:57.115 "nguid": "BD74302BD1904F579D508136C1DF90F7", 00:14:57.115 "uuid": "bd74302b-d190-4f57-9d50-8136c1df90f7", 00:14:57.115 "no_auto_visible": false 00:14:57.115 } 00:14:57.115 } 00:14:57.115 }, 00:14:57.115 { 00:14:57.115 "method": "nvmf_subsystem_add_listener", 00:14:57.115 "params": { 00:14:57.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.115 "listen_address": { 00:14:57.115 "trtype": "TCP", 00:14:57.115 "adrfam": "IPv4", 00:14:57.115 "traddr": "10.0.0.2", 00:14:57.115 "trsvcid": "4420" 00:14:57.115 }, 00:14:57.115 "secure_channel": true 00:14:57.115 } 00:14:57.115 } 00:14:57.115 ] 00:14:57.115 } 00:14:57.115 ] 00:14:57.115 }' 00:14:57.115 17:02:12 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:57.376 17:02:12 -- target/tls.sh@264 -- # bperfcfg='{ 00:14:57.376 "subsystems": [ 00:14:57.376 { 00:14:57.376 "subsystem": "keyring", 00:14:57.376 "config": [ 00:14:57.376 { 00:14:57.376 "method": "keyring_file_add_key", 00:14:57.376 "params": { 00:14:57.376 "name": "key0", 00:14:57.376 "path": "/tmp/tmp.rWc4tKrkDB" 00:14:57.376 } 00:14:57.376 } 00:14:57.376 ] 00:14:57.376 }, 00:14:57.376 { 
00:14:57.376 "subsystem": "iobuf", 00:14:57.376 "config": [ 00:14:57.376 { 00:14:57.376 "method": "iobuf_set_options", 00:14:57.376 "params": { 00:14:57.376 "small_pool_count": 8192, 00:14:57.376 "large_pool_count": 1024, 00:14:57.376 "small_bufsize": 8192, 00:14:57.376 "large_bufsize": 135168 00:14:57.376 } 00:14:57.376 } 00:14:57.376 ] 00:14:57.376 }, 00:14:57.376 { 00:14:57.376 "subsystem": "sock", 00:14:57.376 "config": [ 00:14:57.376 { 00:14:57.376 "method": "sock_impl_set_options", 00:14:57.376 "params": { 00:14:57.376 "impl_name": "posix", 00:14:57.376 "recv_buf_size": 2097152, 00:14:57.376 "send_buf_size": 2097152, 00:14:57.376 "enable_recv_pipe": true, 00:14:57.376 "enable_quickack": false, 00:14:57.376 "enable_placement_id": 0, 00:14:57.376 "enable_zerocopy_send_server": true, 00:14:57.376 "enable_zerocopy_send_client": false, 00:14:57.376 "zerocopy_threshold": 0, 00:14:57.376 "tls_version": 0, 00:14:57.376 "enable_ktls": false 00:14:57.376 } 00:14:57.376 }, 00:14:57.376 { 00:14:57.376 "method": "sock_impl_set_options", 00:14:57.376 "params": { 00:14:57.376 "impl_name": "ssl", 00:14:57.376 "recv_buf_size": 4096, 00:14:57.376 "send_buf_size": 4096, 00:14:57.376 "enable_recv_pipe": true, 00:14:57.376 "enable_quickack": false, 00:14:57.376 "enable_placement_id": 0, 00:14:57.376 "enable_zerocopy_send_server": true, 00:14:57.376 "enable_zerocopy_send_client": false, 00:14:57.376 "zerocopy_threshold": 0, 00:14:57.376 "tls_version": 0, 00:14:57.376 "enable_ktls": false 00:14:57.376 } 00:14:57.376 } 00:14:57.376 ] 00:14:57.376 }, 00:14:57.376 { 00:14:57.376 "subsystem": "vmd", 00:14:57.376 "config": [] 00:14:57.376 }, 00:14:57.376 { 00:14:57.376 "subsystem": "accel", 00:14:57.376 "config": [ 00:14:57.376 { 00:14:57.376 "method": "accel_set_options", 00:14:57.376 "params": { 00:14:57.376 "small_cache_size": 128, 00:14:57.376 "large_cache_size": 16, 00:14:57.376 "task_count": 2048, 00:14:57.376 "sequence_count": 2048, 00:14:57.376 "buf_count": 2048 00:14:57.376 } 
00:14:57.376 } 00:14:57.376 ] 00:14:57.376 }, 00:14:57.376 { 00:14:57.376 "subsystem": "bdev", 00:14:57.376 "config": [ 00:14:57.376 { 00:14:57.376 "method": "bdev_set_options", 00:14:57.376 "params": { 00:14:57.376 "bdev_io_pool_size": 65535, 00:14:57.376 "bdev_io_cache_size": 256, 00:14:57.376 "bdev_auto_examine": true, 00:14:57.376 "iobuf_small_cache_size": 128, 00:14:57.376 "iobuf_large_cache_size": 16 00:14:57.376 } 00:14:57.376 }, 00:14:57.376 { 00:14:57.376 "method": "bdev_raid_set_options", 00:14:57.376 "params": { 00:14:57.376 "process_window_size_kb": 1024 00:14:57.376 } 00:14:57.376 }, 00:14:57.376 { 00:14:57.376 "method": "bdev_iscsi_set_options", 00:14:57.376 "params": { 00:14:57.376 "timeout_sec": 30 00:14:57.376 } 00:14:57.376 }, 00:14:57.376 { 00:14:57.376 "method": "bdev_nvme_set_options", 00:14:57.376 "params": { 00:14:57.376 "action_on_timeout": "none", 00:14:57.376 "timeout_us": 0, 00:14:57.376 "timeout_admin_us": 0, 00:14:57.376 "keep_alive_timeout_ms": 10000, 00:14:57.376 "arbitration_burst": 0, 00:14:57.376 "low_priority_weight": 0, 00:14:57.376 "medium_priority_weight": 0, 00:14:57.376 "high_priority_weight": 0, 00:14:57.376 "nvme_adminq_poll_period_us": 10000, 00:14:57.376 "nvme_ioq_poll_period_us": 0, 00:14:57.376 "io_queue_requests": 512, 00:14:57.376 "delay_cmd_submit": true, 00:14:57.376 "transport_retry_count": 4, 00:14:57.376 "bdev_retry_count": 3, 00:14:57.376 "transport_ack_timeout": 0, 00:14:57.376 "ctrlr_loss_timeout_sec": 0, 00:14:57.377 "reconnect_delay_sec": 0, 00:14:57.377 "fast_io_fail_timeout_sec": 0, 00:14:57.377 "disable_auto_failback": false, 00:14:57.377 "generate_uuids": false, 00:14:57.377 "transport_tos": 0, 00:14:57.377 "nvme_error_stat": false, 00:14:57.377 "rdma_srq_size": 0, 00:14:57.377 "io_path_stat": false, 00:14:57.377 "allow_accel_sequence": false, 00:14:57.377 "rdma_max_cq_size": 0, 00:14:57.377 "rdma_cm_event_timeout_ms": 0, 00:14:57.377 "dhchap_digests": [ 00:14:57.377 "sha256", 00:14:57.377 "sha384", 
00:14:57.377 "sha512" 00:14:57.377 ], 00:14:57.377 "dhchap_dhgroups": [ 00:14:57.377 "null", 00:14:57.377 "ffdhe2048", 00:14:57.377 "ffdhe3072", 00:14:57.377 "ffdhe4096", 00:14:57.377 "ffdhe6144", 00:14:57.377 "ffdhe8192" 00:14:57.377 ] 00:14:57.377 } 00:14:57.377 }, 00:14:57.377 { 00:14:57.377 "method": "bdev_nvme_attach_controller", 00:14:57.377 "params": { 00:14:57.377 "name": "nvme0", 00:14:57.377 "trtype": "TCP", 00:14:57.377 "adrfam": "IPv4", 00:14:57.377 "traddr": "10.0.0.2", 00:14:57.377 "trsvcid": "4420", 00:14:57.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.377 "prchk_reftag": false, 00:14:57.377 "prchk_guard": false, 00:14:57.377 "ctrlr_loss_timeout_sec": 0, 00:14:57.377 "reconnect_delay_sec": 0, 00:14:57.377 "fast_io_fail_timeout_sec": 0, 00:14:57.377 "psk": "key0", 00:14:57.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.377 "hdgst": false, 00:14:57.377 "ddgst": false 00:14:57.377 } 00:14:57.377 }, 00:14:57.377 { 00:14:57.377 "method": "bdev_nvme_set_hotplug", 00:14:57.377 "params": { 00:14:57.377 "period_us": 100000, 00:14:57.377 "enable": false 00:14:57.377 } 00:14:57.377 }, 00:14:57.377 { 00:14:57.377 "method": "bdev_enable_histogram", 00:14:57.377 "params": { 00:14:57.377 "name": "nvme0n1", 00:14:57.377 "enable": true 00:14:57.377 } 00:14:57.377 }, 00:14:57.377 { 00:14:57.377 "method": "bdev_wait_for_examine" 00:14:57.377 } 00:14:57.377 ] 00:14:57.377 }, 00:14:57.377 { 00:14:57.377 "subsystem": "nbd", 00:14:57.377 "config": [] 00:14:57.377 } 00:14:57.377 ] 00:14:57.377 }' 00:14:57.377 17:02:12 -- target/tls.sh@266 -- # killprocess 1693485 00:14:57.377 17:02:12 -- common/autotest_common.sh@936 -- # '[' -z 1693485 ']' 00:14:57.377 17:02:12 -- common/autotest_common.sh@940 -- # kill -0 1693485 00:14:57.377 17:02:12 -- common/autotest_common.sh@941 -- # uname 00:14:57.377 17:02:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.377 17:02:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1693485 
00:14:57.377 17:02:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:57.377 17:02:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:57.377 17:02:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1693485' 00:14:57.377 killing process with pid 1693485 00:14:57.377 17:02:12 -- common/autotest_common.sh@955 -- # kill 1693485 00:14:57.377 Received shutdown signal, test time was about 1.000000 seconds 00:14:57.377 00:14:57.377 Latency(us) 00:14:57.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.377 =================================================================================================================== 00:14:57.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.377 17:02:12 -- common/autotest_common.sh@960 -- # wait 1693485 00:14:57.657 17:02:13 -- target/tls.sh@267 -- # killprocess 1693466 00:14:57.657 17:02:13 -- common/autotest_common.sh@936 -- # '[' -z 1693466 ']' 00:14:57.657 17:02:13 -- common/autotest_common.sh@940 -- # kill -0 1693466 00:14:57.657 17:02:13 -- common/autotest_common.sh@941 -- # uname 00:14:57.657 17:02:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.657 17:02:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1693466 00:14:57.657 17:02:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:57.657 17:02:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:57.657 17:02:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1693466' 00:14:57.657 killing process with pid 1693466 00:14:57.657 17:02:13 -- common/autotest_common.sh@955 -- # kill 1693466 00:14:57.657 17:02:13 -- common/autotest_common.sh@960 -- # wait 1693466 00:14:57.925 17:02:13 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:57.925 17:02:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:57.925 17:02:13 -- target/tls.sh@269 -- # echo '{ 00:14:57.925 
"subsystems": [ 00:14:57.925 { 00:14:57.925 "subsystem": "keyring", 00:14:57.925 "config": [ 00:14:57.925 { 00:14:57.925 "method": "keyring_file_add_key", 00:14:57.925 "params": { 00:14:57.925 "name": "key0", 00:14:57.925 "path": "/tmp/tmp.rWc4tKrkDB" 00:14:57.925 } 00:14:57.925 } 00:14:57.925 ] 00:14:57.925 }, 00:14:57.925 { 00:14:57.925 "subsystem": "iobuf", 00:14:57.925 "config": [ 00:14:57.925 { 00:14:57.925 "method": "iobuf_set_options", 00:14:57.925 "params": { 00:14:57.925 "small_pool_count": 8192, 00:14:57.925 "large_pool_count": 1024, 00:14:57.925 "small_bufsize": 8192, 00:14:57.925 "large_bufsize": 135168 00:14:57.925 } 00:14:57.925 } 00:14:57.925 ] 00:14:57.925 }, 00:14:57.925 { 00:14:57.925 "subsystem": "sock", 00:14:57.925 "config": [ 00:14:57.925 { 00:14:57.925 "method": "sock_impl_set_options", 00:14:57.925 "params": { 00:14:57.925 "impl_name": "posix", 00:14:57.925 "recv_buf_size": 2097152, 00:14:57.926 "send_buf_size": 2097152, 00:14:57.926 "enable_recv_pipe": true, 00:14:57.926 "enable_quickack": false, 00:14:57.926 "enable_placement_id": 0, 00:14:57.926 "enable_zerocopy_send_server": true, 00:14:57.926 "enable_zerocopy_send_client": false, 00:14:57.926 "zerocopy_threshold": 0, 00:14:57.926 "tls_version": 0, 00:14:57.926 "enable_ktls": false 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "sock_impl_set_options", 00:14:57.926 "params": { 00:14:57.926 "impl_name": "ssl", 00:14:57.926 "recv_buf_size": 4096, 00:14:57.926 "send_buf_size": 4096, 00:14:57.926 "enable_recv_pipe": true, 00:14:57.926 "enable_quickack": false, 00:14:57.926 "enable_placement_id": 0, 00:14:57.926 "enable_zerocopy_send_server": true, 00:14:57.926 "enable_zerocopy_send_client": false, 00:14:57.926 "zerocopy_threshold": 0, 00:14:57.926 "tls_version": 0, 00:14:57.926 "enable_ktls": false 00:14:57.926 } 00:14:57.926 } 00:14:57.926 ] 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "subsystem": "vmd", 00:14:57.926 "config": [] 00:14:57.926 }, 00:14:57.926 { 
00:14:57.926 "subsystem": "accel", 00:14:57.926 "config": [ 00:14:57.926 { 00:14:57.926 "method": "accel_set_options", 00:14:57.926 "params": { 00:14:57.926 "small_cache_size": 128, 00:14:57.926 "large_cache_size": 16, 00:14:57.926 "task_count": 2048, 00:14:57.926 "sequence_count": 2048, 00:14:57.926 "buf_count": 2048 00:14:57.926 } 00:14:57.926 } 00:14:57.926 ] 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "subsystem": "bdev", 00:14:57.926 "config": [ 00:14:57.926 { 00:14:57.926 "method": "bdev_set_options", 00:14:57.926 "params": { 00:14:57.926 "bdev_io_pool_size": 65535, 00:14:57.926 "bdev_io_cache_size": 256, 00:14:57.926 "bdev_auto_examine": true, 00:14:57.926 "iobuf_small_cache_size": 128, 00:14:57.926 "iobuf_large_cache_size": 16 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "bdev_raid_set_options", 00:14:57.926 "params": { 00:14:57.926 "process_window_size_kb": 1024 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "bdev_iscsi_set_options", 00:14:57.926 "params": { 00:14:57.926 "timeout_sec": 30 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "bdev_nvme_set_options", 00:14:57.926 "params": { 00:14:57.926 "action_on_timeout": "none", 00:14:57.926 "timeout_us": 0, 00:14:57.926 "timeout_admin_us": 0, 00:14:57.926 "keep_alive_timeout_ms": 10000, 00:14:57.926 "arbitration_burst": 0, 00:14:57.926 "low_priority_weight": 0, 00:14:57.926 "medium_priority_weight": 0, 00:14:57.926 "high_priority_weight": 0, 00:14:57.926 "nvme_adminq_poll_period_us": 10000, 00:14:57.926 "nvme_ioq_poll_period_us": 0, 00:14:57.926 "io_queue_requests": 0, 00:14:57.926 "delay_cmd_submit": true, 00:14:57.926 "transport_retry_count": 4, 00:14:57.926 "bdev_retry_count": 3, 00:14:57.926 "transport_ack_timeout": 0, 00:14:57.926 "ctrlr_loss_timeout_sec": 0, 00:14:57.926 "reconnect_delay_sec": 0, 00:14:57.926 "fast_io_fail_timeout_sec": 0, 00:14:57.926 "disable_auto_failback": false, 00:14:57.926 "generate_uuids": false, 00:14:57.926 
"transport_tos": 0, 00:14:57.926 "nvme_error_stat": false, 00:14:57.926 "rdma_srq_size": 0, 00:14:57.926 "io_path_stat": false, 00:14:57.926 "allow_accel_sequence": false, 00:14:57.926 "rdma_max_cq_size": 0, 00:14:57.926 "rdma_cm_event_timeout_ms": 0, 00:14:57.926 "dhchap_digests": [ 00:14:57.926 "sha256", 00:14:57.926 "sha384", 00:14:57.926 "sha512" 00:14:57.926 ], 00:14:57.926 "dhchap_dhgroups": [ 00:14:57.926 "null", 00:14:57.926 "ffdhe2048", 00:14:57.926 "ffdhe3072", 00:14:57.926 "ffdhe4096", 00:14:57.926 "ffdhe6144", 00:14:57.926 "ffdhe8192" 00:14:57.926 ] 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "bdev_nvme_set_hotplug", 00:14:57.926 "params": { 00:14:57.926 "period_us": 100000, 00:14:57.926 "enable": false 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "bdev_malloc_create", 00:14:57.926 "params": { 00:14:57.926 "name": "malloc0", 00:14:57.926 "num_blocks": 8192, 00:14:57.926 "block_size": 4096, 00:14:57.926 "physical_block_size": 4096, 00:14:57.926 "uuid": "bd74302b-d190-4f57-9d50-8136c1df90f7", 00:14:57.926 "optimal_io_boundary": 0 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "bdev_wait_for_examine" 00:14:57.926 } 00:14:57.926 ] 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "subsystem": "nbd", 00:14:57.926 "config": [] 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "subsystem": "scheduler", 00:14:57.926 "config": [ 00:14:57.926 { 00:14:57.926 "method": "framework_set_scheduler", 00:14:57.926 "params": { 00:14:57.926 "name": "static" 00:14:57.926 } 00:14:57.926 } 00:14:57.926 ] 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "subsystem": "nvmf", 00:14:57.926 "config": [ 00:14:57.926 { 00:14:57.926 "method": "nvmf_set_config", 00:14:57.926 "params": { 00:14:57.926 "discovery_filter": "match_any", 00:14:57.926 "admin_cmd_passthru": { 00:14:57.926 "identify_ctrlr": false 00:14:57.926 } 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "nvmf_set_max_subsystems", 00:14:57.926 
"params": { 00:14:57.926 "max_subsystems": 1024 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "nvmf_set_crdt", 00:14:57.926 "params": { 00:14:57.926 "crdt1": 0, 00:14:57.926 "crdt2": 0, 00:14:57.926 "crdt3": 0 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "nvmf_create_transport", 00:14:57.926 "params": { 00:14:57.926 "trtype": "TCP", 00:14:57.926 "max_queue_depth": 128, 00:14:57.926 "max_io_qpairs_per_ctrlr": 127, 00:14:57.926 "in_capsule_data_size": 4096, 00:14:57.926 "max_io_size": 131072, 00:14:57.926 "io_unit_size": 131072, 00:14:57.926 "max_aq_depth": 128, 00:14:57.926 "num_shared_buffers": 511, 00:14:57.926 "buf_cache_size": 4294967295, 00:14:57.926 "dif_insert_or_strip": false, 00:14:57.926 "zcopy": false, 00:14:57.926 "c2h_success": false, 00:14:57.926 "sock_priority": 0, 00:14:57.926 "abort_timeout_sec": 1, 00:14:57.926 "ack_timeout": 0 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "nvmf_create_subsystem", 00:14:57.926 "params": { 00:14:57.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.926 "allow_any_host": false, 00:14:57.926 "serial_number": "00000000000000000000", 00:14:57.926 "model_number": "SPDK bdev Controller", 00:14:57.926 "max_namespaces": 32, 00:14:57.926 "min_cntlid": 1, 00:14:57.926 "max_cntlid": 65519, 00:14:57.926 "ana_reporting": false 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "nvmf_subsystem_add_host", 00:14:57.926 "params": { 00:14:57.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.926 "host": "nqn.2016-06.io.spdk:host1", 00:14:57.926 "psk": "key0" 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "nvmf_subsystem_add_ns", 00:14:57.926 "params": { 00:14:57.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.926 "namespace": { 00:14:57.926 "nsid": 1, 00:14:57.926 "bdev_name": "malloc0", 00:14:57.926 "nguid": "BD74302BD1904F579D508136C1DF90F7", 00:14:57.926 "uuid": "bd74302b-d190-4f57-9d50-8136c1df90f7", 00:14:57.926 
"no_auto_visible": false 00:14:57.926 } 00:14:57.926 } 00:14:57.926 }, 00:14:57.926 { 00:14:57.926 "method": "nvmf_subsystem_add_listener", 00:14:57.926 "params": { 00:14:57.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.926 "listen_address": { 00:14:57.926 "trtype": "TCP", 00:14:57.926 "adrfam": "IPv4", 00:14:57.926 "traddr": "10.0.0.2", 00:14:57.926 "trsvcid": "4420" 00:14:57.926 }, 00:14:57.926 "secure_channel": true 00:14:57.926 } 00:14:57.926 } 00:14:57.926 ] 00:14:57.926 } 00:14:57.926 ] 00:14:57.926 }' 00:14:57.926 17:02:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:57.926 17:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:57.926 17:02:13 -- nvmf/common.sh@470 -- # nvmfpid=1693895 00:14:57.926 17:02:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:57.926 17:02:13 -- nvmf/common.sh@471 -- # waitforlisten 1693895 00:14:57.926 17:02:13 -- common/autotest_common.sh@817 -- # '[' -z 1693895 ']' 00:14:57.926 17:02:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.926 17:02:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.926 17:02:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.926 17:02:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.926 17:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:57.926 [2024-04-18 17:02:13.591406] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:57.927 [2024-04-18 17:02:13.591490] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.927 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.187 [2024-04-18 17:02:13.669739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.187 [2024-04-18 17:02:13.802749] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.187 [2024-04-18 17:02:13.802824] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.187 [2024-04-18 17:02:13.802866] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.187 [2024-04-18 17:02:13.802888] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.187 [2024-04-18 17:02:13.802921] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:58.187 [2024-04-18 17:02:13.803037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.446 [2024-04-18 17:02:14.032852] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.446 [2024-04-18 17:02:14.064861] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.446 [2024-04-18 17:02:14.083526] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.011 17:02:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:59.011 17:02:14 -- common/autotest_common.sh@850 -- # return 0 00:14:59.011 17:02:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:59.011 17:02:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:59.011 17:02:14 -- common/autotest_common.sh@10 -- # set +x 00:14:59.011 17:02:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.011 17:02:14 -- target/tls.sh@272 -- # bdevperf_pid=1694054 00:14:59.011 17:02:14 -- target/tls.sh@273 -- # waitforlisten 1694054 /var/tmp/bdevperf.sock 00:14:59.011 17:02:14 -- common/autotest_common.sh@817 -- # '[' -z 1694054 ']' 00:14:59.011 17:02:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.011 17:02:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:59.011 17:02:14 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:59.011 17:02:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:59.011 17:02:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:59.011 17:02:14 -- common/autotest_common.sh@10 -- # set +x 00:14:59.011 17:02:14 -- target/tls.sh@270 -- # echo '{ 00:14:59.011 "subsystems": [ 00:14:59.011 { 00:14:59.011 "subsystem": "keyring", 00:14:59.011 "config": [ 00:14:59.011 { 00:14:59.011 "method": "keyring_file_add_key", 00:14:59.012 "params": { 00:14:59.012 "name": "key0", 00:14:59.012 "path": "/tmp/tmp.rWc4tKrkDB" 00:14:59.012 } 00:14:59.012 } 00:14:59.012 ] 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "subsystem": "iobuf", 00:14:59.012 "config": [ 00:14:59.012 { 00:14:59.012 "method": "iobuf_set_options", 00:14:59.012 "params": { 00:14:59.012 "small_pool_count": 8192, 00:14:59.012 "large_pool_count": 1024, 00:14:59.012 "small_bufsize": 8192, 00:14:59.012 "large_bufsize": 135168 00:14:59.012 } 00:14:59.012 } 00:14:59.012 ] 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "subsystem": "sock", 00:14:59.012 "config": [ 00:14:59.012 { 00:14:59.012 "method": "sock_impl_set_options", 00:14:59.012 "params": { 00:14:59.012 "impl_name": "posix", 00:14:59.012 "recv_buf_size": 2097152, 00:14:59.012 "send_buf_size": 2097152, 00:14:59.012 "enable_recv_pipe": true, 00:14:59.012 "enable_quickack": false, 00:14:59.012 "enable_placement_id": 0, 00:14:59.012 "enable_zerocopy_send_server": true, 00:14:59.012 "enable_zerocopy_send_client": false, 00:14:59.012 "zerocopy_threshold": 0, 00:14:59.012 "tls_version": 0, 00:14:59.012 "enable_ktls": false 00:14:59.012 } 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "method": "sock_impl_set_options", 00:14:59.012 "params": { 00:14:59.012 "impl_name": "ssl", 00:14:59.012 "recv_buf_size": 4096, 00:14:59.012 "send_buf_size": 4096, 00:14:59.012 "enable_recv_pipe": true, 00:14:59.012 "enable_quickack": false, 00:14:59.012 "enable_placement_id": 0, 00:14:59.012 "enable_zerocopy_send_server": true, 00:14:59.012 "enable_zerocopy_send_client": false, 00:14:59.012 "zerocopy_threshold": 0, 00:14:59.012 "tls_version": 
0, 00:14:59.012 "enable_ktls": false 00:14:59.012 } 00:14:59.012 } 00:14:59.012 ] 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "subsystem": "vmd", 00:14:59.012 "config": [] 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "subsystem": "accel", 00:14:59.012 "config": [ 00:14:59.012 { 00:14:59.012 "method": "accel_set_options", 00:14:59.012 "params": { 00:14:59.012 "small_cache_size": 128, 00:14:59.012 "large_cache_size": 16, 00:14:59.012 "task_count": 2048, 00:14:59.012 "sequence_count": 2048, 00:14:59.012 "buf_count": 2048 00:14:59.012 } 00:14:59.012 } 00:14:59.012 ] 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "subsystem": "bdev", 00:14:59.012 "config": [ 00:14:59.012 { 00:14:59.012 "method": "bdev_set_options", 00:14:59.012 "params": { 00:14:59.012 "bdev_io_pool_size": 65535, 00:14:59.012 "bdev_io_cache_size": 256, 00:14:59.012 "bdev_auto_examine": true, 00:14:59.012 "iobuf_small_cache_size": 128, 00:14:59.012 "iobuf_large_cache_size": 16 00:14:59.012 } 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "method": "bdev_raid_set_options", 00:14:59.012 "params": { 00:14:59.012 "process_window_size_kb": 1024 00:14:59.012 } 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "method": "bdev_iscsi_set_options", 00:14:59.012 "params": { 00:14:59.012 "timeout_sec": 30 00:14:59.012 } 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "method": "bdev_nvme_set_options", 00:14:59.012 "params": { 00:14:59.012 "action_on_timeout": "none", 00:14:59.012 "timeout_us": 0, 00:14:59.012 "timeout_admin_us": 0, 00:14:59.012 "keep_alive_timeout_ms": 10000, 00:14:59.012 "arbitration_burst": 0, 00:14:59.012 "low_priority_weight": 0, 00:14:59.012 "medium_priority_weight": 0, 00:14:59.012 "high_priority_weight": 0, 00:14:59.012 "nvme_adminq_poll_period_us": 10000, 00:14:59.012 "nvme_ioq_poll_period_us": 0, 00:14:59.012 "io_queue_requests": 512, 00:14:59.012 "delay_cmd_submit": true, 00:14:59.012 "transport_retry_count": 4, 00:14:59.012 "bdev_retry_count": 3, 00:14:59.012 "transport_ack_timeout": 0, 00:14:59.012 
"ctrlr_loss_timeout_sec": 0, 00:14:59.012 "reconnect_delay_sec": 0, 00:14:59.012 "fast_io_fail_timeout_sec": 0, 00:14:59.012 "disable_auto_failback": false, 00:14:59.012 "generate_uuids": false, 00:14:59.012 "transport_tos": 0, 00:14:59.012 "nvme_error_stat": false, 00:14:59.012 "rdma_srq_size": 0, 00:14:59.012 "io_path_stat": false, 00:14:59.012 "allow_accel_sequence": false, 00:14:59.012 "rdma_max_cq_size": 0, 00:14:59.012 "rdma_cm_event_timeout_ms": 0, 00:14:59.012 "dhchap_digests": [ 00:14:59.012 "sha256", 00:14:59.012 "sha384", 00:14:59.012 "sha512" 00:14:59.012 ], 00:14:59.012 "dhchap_dhgroups": [ 00:14:59.012 "null", 00:14:59.012 "ffdhe2048", 00:14:59.012 "ffdhe3072", 00:14:59.012 "ffdhe4096", 00:14:59.012 "ffdhe6144", 00:14:59.012 "ffdhe8192" 00:14:59.012 ] 00:14:59.012 } 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "method": "bdev_nvme_attach_controller", 00:14:59.012 "params": { 00:14:59.012 "name": "nvme0", 00:14:59.012 "trtype": "TCP", 00:14:59.012 "adrfam": "IPv4", 00:14:59.012 "traddr": "10.0.0.2", 00:14:59.012 "trsvcid": "4420", 00:14:59.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.012 "prchk_reftag": false, 00:14:59.012 "prchk_guard": false, 00:14:59.012 "ctrlr_loss_timeout_sec": 0, 00:14:59.012 "reconnect_delay_sec": 0, 00:14:59.012 "fast_io_fail_timeout_sec": 0, 00:14:59.012 "psk": "key0", 00:14:59.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:59.012 "hdgst": false, 00:14:59.012 "ddgst": false 00:14:59.012 } 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "method": "bdev_nvme_set_hotplug", 00:14:59.012 "params": { 00:14:59.012 "period_us": 100000, 00:14:59.012 "enable": false 00:14:59.012 } 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "method": "bdev_enable_histogram", 00:14:59.012 "params": { 00:14:59.012 "name": "nvme0n1", 00:14:59.012 "enable": true 00:14:59.012 } 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "method": "bdev_wait_for_examine" 00:14:59.012 } 00:14:59.012 ] 00:14:59.012 }, 00:14:59.012 { 00:14:59.012 "subsystem": "nbd", 
00:14:59.012 "config": [] 00:14:59.012 } 00:14:59.012 ] 00:14:59.012 }' 00:14:59.012 [2024-04-18 17:02:14.709561] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:59.012 [2024-04-18 17:02:14.709651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694054 ] 00:14:59.272 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.272 [2024-04-18 17:02:14.771496] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.272 [2024-04-18 17:02:14.884955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.530 [2024-04-18 17:02:15.062451] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.096 17:02:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:00.096 17:02:15 -- common/autotest_common.sh@850 -- # return 0 00:15:00.096 17:02:15 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:00.096 17:02:15 -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:00.354 17:02:15 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.354 17:02:15 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.354 Running I/O for 1 seconds... 
00:15:01.733 00:15:01.733 Latency(us) 00:15:01.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.733 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.733 Verification LBA range: start 0x0 length 0x2000 00:15:01.733 nvme0n1 : 1.02 3406.24 13.31 0.00 0.00 37128.03 10048.85 41943.04 00:15:01.733 =================================================================================================================== 00:15:01.733 Total : 3406.24 13.31 0.00 0.00 37128.03 10048.85 41943.04 00:15:01.733 0 00:15:01.733 17:02:17 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:01.733 17:02:17 -- target/tls.sh@279 -- # cleanup 00:15:01.733 17:02:17 -- target/tls.sh@15 -- # process_shm --id 0 00:15:01.733 17:02:17 -- common/autotest_common.sh@794 -- # type=--id 00:15:01.733 17:02:17 -- common/autotest_common.sh@795 -- # id=0 00:15:01.733 17:02:17 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:01.733 17:02:17 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:01.733 17:02:17 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:01.733 17:02:17 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:01.733 17:02:17 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:01.733 17:02:17 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:01.733 nvmf_trace.0 00:15:01.733 17:02:17 -- common/autotest_common.sh@809 -- # return 0 00:15:01.733 17:02:17 -- target/tls.sh@16 -- # killprocess 1694054 00:15:01.733 17:02:17 -- common/autotest_common.sh@936 -- # '[' -z 1694054 ']' 00:15:01.733 17:02:17 -- common/autotest_common.sh@940 -- # kill -0 1694054 00:15:01.733 17:02:17 -- common/autotest_common.sh@941 -- # uname 00:15:01.733 17:02:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.733 17:02:17 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1694054 00:15:01.733 17:02:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:01.733 17:02:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:01.733 17:02:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1694054' 00:15:01.733 killing process with pid 1694054 00:15:01.733 17:02:17 -- common/autotest_common.sh@955 -- # kill 1694054 00:15:01.733 Received shutdown signal, test time was about 1.000000 seconds 00:15:01.733 00:15:01.733 Latency(us) 00:15:01.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.733 =================================================================================================================== 00:15:01.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.733 17:02:17 -- common/autotest_common.sh@960 -- # wait 1694054 00:15:01.733 17:02:17 -- target/tls.sh@17 -- # nvmftestfini 00:15:01.733 17:02:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:01.733 17:02:17 -- nvmf/common.sh@117 -- # sync 00:15:01.733 17:02:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.733 17:02:17 -- nvmf/common.sh@120 -- # set +e 00:15:01.733 17:02:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.733 17:02:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.733 rmmod nvme_tcp 00:15:01.733 rmmod nvme_fabrics 00:15:01.998 rmmod nvme_keyring 00:15:01.998 17:02:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.998 17:02:17 -- nvmf/common.sh@124 -- # set -e 00:15:01.998 17:02:17 -- nvmf/common.sh@125 -- # return 0 00:15:01.998 17:02:17 -- nvmf/common.sh@478 -- # '[' -n 1693895 ']' 00:15:01.998 17:02:17 -- nvmf/common.sh@479 -- # killprocess 1693895 00:15:01.998 17:02:17 -- common/autotest_common.sh@936 -- # '[' -z 1693895 ']' 00:15:01.998 17:02:17 -- common/autotest_common.sh@940 -- # kill -0 1693895 00:15:01.998 17:02:17 -- common/autotest_common.sh@941 -- # uname 
00:15:01.998 17:02:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.998 17:02:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1693895 00:15:01.998 17:02:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:01.998 17:02:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:01.998 17:02:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1693895' 00:15:01.998 killing process with pid 1693895 00:15:01.998 17:02:17 -- common/autotest_common.sh@955 -- # kill 1693895 00:15:01.998 17:02:17 -- common/autotest_common.sh@960 -- # wait 1693895 00:15:02.258 17:02:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:02.258 17:02:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:02.258 17:02:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:02.258 17:02:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.258 17:02:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.258 17:02:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.258 17:02:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.258 17:02:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.167 17:02:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:04.167 17:02:19 -- target/tls.sh@18 -- # rm -f /tmp/tmp.weTem5L5gd /tmp/tmp.UnrreeMArB /tmp/tmp.rWc4tKrkDB 00:15:04.167 00:15:04.167 real 1m22.876s 00:15:04.167 user 2m6.652s 00:15:04.167 sys 0m29.004s 00:15:04.167 17:02:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.167 17:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:04.167 ************************************ 00:15:04.167 END TEST nvmf_tls 00:15:04.167 ************************************ 00:15:04.167 17:02:19 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:04.167 17:02:19 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:04.167 17:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.167 17:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:04.426 ************************************ 00:15:04.426 START TEST nvmf_fips 00:15:04.426 ************************************ 00:15:04.426 17:02:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:04.426 * Looking for test storage... 00:15:04.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:04.426 17:02:19 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.426 17:02:19 -- nvmf/common.sh@7 -- # uname -s 00:15:04.426 17:02:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.426 17:02:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.426 17:02:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.426 17:02:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.426 17:02:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.426 17:02:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.426 17:02:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.426 17:02:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.426 17:02:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.426 17:02:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.426 17:02:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.426 17:02:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.426 17:02:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.426 17:02:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.426 17:02:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.426 
17:02:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.426 17:02:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.426 17:02:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.426 17:02:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.426 17:02:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.426 17:02:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.426 17:02:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.426 17:02:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.426 17:02:20 -- paths/export.sh@5 -- # export PATH 00:15:04.426 17:02:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.426 17:02:20 -- nvmf/common.sh@47 -- # : 0 00:15:04.426 17:02:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.426 17:02:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.426 17:02:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.426 17:02:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.426 17:02:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.426 17:02:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.426 17:02:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.426 17:02:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.426 17:02:20 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.426 17:02:20 -- fips/fips.sh@89 -- # check_openssl_version 
00:15:04.426 17:02:20 -- fips/fips.sh@83 -- # local target=3.0.0 00:15:04.426 17:02:20 -- fips/fips.sh@85 -- # openssl version 00:15:04.426 17:02:20 -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:04.426 17:02:20 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:04.426 17:02:20 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:04.426 17:02:20 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:04.426 17:02:20 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:04.426 17:02:20 -- scripts/common.sh@333 -- # IFS=.-: 00:15:04.426 17:02:20 -- scripts/common.sh@333 -- # read -ra ver1 00:15:04.426 17:02:20 -- scripts/common.sh@334 -- # IFS=.-: 00:15:04.426 17:02:20 -- scripts/common.sh@334 -- # read -ra ver2 00:15:04.426 17:02:20 -- scripts/common.sh@335 -- # local 'op=>=' 00:15:04.426 17:02:20 -- scripts/common.sh@337 -- # ver1_l=3 00:15:04.426 17:02:20 -- scripts/common.sh@338 -- # ver2_l=3 00:15:04.426 17:02:20 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:04.426 17:02:20 -- scripts/common.sh@341 -- # case "$op" in 00:15:04.426 17:02:20 -- scripts/common.sh@345 -- # : 1 00:15:04.426 17:02:20 -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:04.426 17:02:20 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.426 17:02:20 -- scripts/common.sh@362 -- # decimal 3 00:15:04.426 17:02:20 -- scripts/common.sh@350 -- # local d=3 00:15:04.426 17:02:20 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:04.426 17:02:20 -- scripts/common.sh@352 -- # echo 3 00:15:04.426 17:02:20 -- scripts/common.sh@362 -- # ver1[v]=3 00:15:04.426 17:02:20 -- scripts/common.sh@363 -- # decimal 3 00:15:04.426 17:02:20 -- scripts/common.sh@350 -- # local d=3 00:15:04.426 17:02:20 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:04.426 17:02:20 -- scripts/common.sh@352 -- # echo 3 00:15:04.426 17:02:20 -- scripts/common.sh@363 -- # ver2[v]=3 00:15:04.426 17:02:20 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:04.426 17:02:20 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:04.426 17:02:20 -- scripts/common.sh@361 -- # (( v++ )) 00:15:04.426 17:02:20 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.426 17:02:20 -- scripts/common.sh@362 -- # decimal 0 00:15:04.426 17:02:20 -- scripts/common.sh@350 -- # local d=0 00:15:04.426 17:02:20 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:04.426 17:02:20 -- scripts/common.sh@352 -- # echo 0 00:15:04.426 17:02:20 -- scripts/common.sh@362 -- # ver1[v]=0 00:15:04.426 17:02:20 -- scripts/common.sh@363 -- # decimal 0 00:15:04.427 17:02:20 -- scripts/common.sh@350 -- # local d=0 00:15:04.427 17:02:20 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:04.427 17:02:20 -- scripts/common.sh@352 -- # echo 0 00:15:04.427 17:02:20 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:04.427 17:02:20 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:04.427 17:02:20 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:04.427 17:02:20 -- scripts/common.sh@361 -- # (( v++ )) 00:15:04.427 17:02:20 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.427 17:02:20 -- scripts/common.sh@362 -- # decimal 9 00:15:04.427 17:02:20 -- scripts/common.sh@350 -- # local d=9 00:15:04.427 17:02:20 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:04.427 17:02:20 -- scripts/common.sh@352 -- # echo 9 00:15:04.427 17:02:20 -- scripts/common.sh@362 -- # ver1[v]=9 00:15:04.427 17:02:20 -- scripts/common.sh@363 -- # decimal 0 00:15:04.427 17:02:20 -- scripts/common.sh@350 -- # local d=0 00:15:04.427 17:02:20 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:04.427 17:02:20 -- scripts/common.sh@352 -- # echo 0 00:15:04.427 17:02:20 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:04.427 17:02:20 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:04.427 17:02:20 -- scripts/common.sh@364 -- # return 0 00:15:04.427 17:02:20 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:04.427 17:02:20 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:04.427 17:02:20 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:04.427 17:02:20 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:04.427 17:02:20 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:04.427 17:02:20 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:04.427 17:02:20 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:04.427 17:02:20 -- fips/fips.sh@113 -- # build_openssl_config 00:15:04.427 17:02:20 -- fips/fips.sh@37 -- # cat 00:15:04.427 17:02:20 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:04.427 17:02:20 -- fips/fips.sh@58 -- # cat - 00:15:04.427 17:02:20 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:04.427 17:02:20 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:04.427 17:02:20 -- fips/fips.sh@116 -- # mapfile -t providers 00:15:04.427 17:02:20 -- fips/fips.sh@116 -- # openssl list -providers 00:15:04.427 17:02:20 -- fips/fips.sh@116 -- # grep name 00:15:04.427 17:02:20 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:04.427 17:02:20 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:04.427 17:02:20 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:04.427 17:02:20 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:04.427 17:02:20 -- fips/fips.sh@127 -- # : 00:15:04.427 17:02:20 -- common/autotest_common.sh@638 -- # local es=0 00:15:04.427 17:02:20 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:04.427 17:02:20 -- common/autotest_common.sh@626 -- # local arg=openssl 00:15:04.427 17:02:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.427 17:02:20 -- common/autotest_common.sh@630 -- # type -t openssl 00:15:04.427 17:02:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.427 17:02:20 -- common/autotest_common.sh@632 -- # type -P openssl 00:15:04.427 17:02:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.427 17:02:20 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:15:04.427 17:02:20 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:15:04.427 17:02:20 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:15:04.685 Error setting digest 00:15:04.685 0072D1C2347F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:04.685 
0072D1C2347F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:04.685 17:02:20 -- common/autotest_common.sh@641 -- # es=1 00:15:04.685 17:02:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:04.685 17:02:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:04.685 17:02:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:04.685 17:02:20 -- fips/fips.sh@130 -- # nvmftestinit 00:15:04.685 17:02:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:04.685 17:02:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.685 17:02:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:04.685 17:02:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:04.685 17:02:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:04.685 17:02:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.685 17:02:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.685 17:02:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.685 17:02:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:04.685 17:02:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:04.685 17:02:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.685 17:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:06.588 17:02:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:06.588 17:02:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.588 17:02:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.588 17:02:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.588 17:02:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.588 17:02:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.588 17:02:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.588 17:02:22 -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.588 17:02:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.588 17:02:22 -- 
nvmf/common.sh@296 -- # e810=() 00:15:06.588 17:02:22 -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.588 17:02:22 -- nvmf/common.sh@297 -- # x722=() 00:15:06.588 17:02:22 -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.588 17:02:22 -- nvmf/common.sh@298 -- # mlx=() 00:15:06.588 17:02:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.588 17:02:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.588 17:02:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.588 17:02:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.588 17:02:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.588 17:02:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.588 17:02:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:06.588 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:06.588 
17:02:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.588 17:02:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:06.588 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:06.588 17:02:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.588 17:02:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.588 17:02:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.588 17:02:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:06.588 17:02:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.588 17:02:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:06.588 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:06.588 17:02:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.588 17:02:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.588 17:02:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.588 17:02:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:06.588 17:02:22 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.588 17:02:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:06.588 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:06.588 17:02:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.588 17:02:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:06.588 17:02:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:06.588 17:02:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:06.588 17:02:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.588 17:02:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.588 17:02:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.588 17:02:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.588 17:02:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.588 17:02:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.588 17:02:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.588 17:02:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.588 17:02:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.588 17:02:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.588 17:02:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.588 17:02:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.588 17:02:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.588 17:02:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.588 17:02:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.588 17:02:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.588 17:02:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:15:06.588 17:02:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.588 17:02:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.588 17:02:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:15:06.588 00:15:06.588 --- 10.0.0.2 ping statistics --- 00:15:06.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.588 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:15:06.588 17:02:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:15:06.588 00:15:06.588 --- 10.0.0.1 ping statistics --- 00:15:06.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.588 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:15:06.588 17:02:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.588 17:02:22 -- nvmf/common.sh@411 -- # return 0 00:15:06.588 17:02:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:06.588 17:02:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.588 17:02:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:06.588 17:02:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.588 17:02:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:06.588 17:02:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:06.588 17:02:22 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:06.588 17:02:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:06.588 17:02:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:06.588 17:02:22 -- common/autotest_common.sh@10 -- # set +x 
00:15:06.849 17:02:22 -- nvmf/common.sh@470 -- # nvmfpid=1696390 00:15:06.849 17:02:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:06.849 17:02:22 -- nvmf/common.sh@471 -- # waitforlisten 1696390 00:15:06.849 17:02:22 -- common/autotest_common.sh@817 -- # '[' -z 1696390 ']' 00:15:06.849 17:02:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.849 17:02:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:06.849 17:02:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.849 17:02:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:06.849 17:02:22 -- common/autotest_common.sh@10 -- # set +x 00:15:06.849 [2024-04-18 17:02:22.364304] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:06.849 [2024-04-18 17:02:22.364409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.849 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.849 [2024-04-18 17:02:22.431972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.849 [2024-04-18 17:02:22.544759] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.849 [2024-04-18 17:02:22.544857] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:06.849 [2024-04-18 17:02:22.544873] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.849 [2024-04-18 17:02:22.544887] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.849 [2024-04-18 17:02:22.544898] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.849 [2024-04-18 17:02:22.544934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.784 17:02:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:07.784 17:02:23 -- common/autotest_common.sh@850 -- # return 0 00:15:07.784 17:02:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:07.784 17:02:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:07.784 17:02:23 -- common/autotest_common.sh@10 -- # set +x 00:15:07.784 17:02:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.784 17:02:23 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:07.784 17:02:23 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:07.784 17:02:23 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:07.784 17:02:23 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:07.784 17:02:23 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:07.784 17:02:23 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:07.784 17:02:23 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:07.784 17:02:23 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:08.042 [2024-04-18 17:02:23.561929] tcp.c: 
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.042 [2024-04-18 17:02:23.577917] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:08.042 [2024-04-18 17:02:23.578150] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.042 [2024-04-18 17:02:23.609327] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:08.042 malloc0 00:15:08.042 17:02:23 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.042 17:02:23 -- fips/fips.sh@147 -- # bdevperf_pid=1696581 00:15:08.042 17:02:23 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:08.042 17:02:23 -- fips/fips.sh@148 -- # waitforlisten 1696581 /var/tmp/bdevperf.sock 00:15:08.042 17:02:23 -- common/autotest_common.sh@817 -- # '[' -z 1696581 ']' 00:15:08.042 17:02:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.042 17:02:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.042 17:02:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.042 17:02:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.042 17:02:23 -- common/autotest_common.sh@10 -- # set +x 00:15:08.042 [2024-04-18 17:02:23.692757] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:08.042 [2024-04-18 17:02:23.692832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1696581 ] 00:15:08.042 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.300 [2024-04-18 17:02:23.750626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.300 [2024-04-18 17:02:23.855899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.300 17:02:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:08.300 17:02:23 -- common/autotest_common.sh@850 -- # return 0 00:15:08.300 17:02:23 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:08.559 [2024-04-18 17:02:24.182638] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.559 [2024-04-18 17:02:24.182815] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:08.559 TLSTESTn1 00:15:08.818 17:02:24 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:08.818 Running I/O for 10 seconds... 
00:15:18.807 00:15:18.807 Latency(us) 00:15:18.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.807 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:18.807 Verification LBA range: start 0x0 length 0x2000 00:15:18.807 TLSTESTn1 : 10.03 3437.68 13.43 0.00 0.00 37164.69 6310.87 59419.31 00:15:18.807 =================================================================================================================== 00:15:18.807 Total : 3437.68 13.43 0.00 0.00 37164.69 6310.87 59419.31 00:15:18.807 0 00:15:18.807 17:02:34 -- fips/fips.sh@1 -- # cleanup 00:15:18.807 17:02:34 -- fips/fips.sh@15 -- # process_shm --id 0 00:15:18.807 17:02:34 -- common/autotest_common.sh@794 -- # type=--id 00:15:18.807 17:02:34 -- common/autotest_common.sh@795 -- # id=0 00:15:18.807 17:02:34 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:18.807 17:02:34 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:18.807 17:02:34 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:18.807 17:02:34 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:18.807 17:02:34 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:18.807 17:02:34 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:18.807 nvmf_trace.0 00:15:18.807 17:02:34 -- common/autotest_common.sh@809 -- # return 0 00:15:18.807 17:02:34 -- fips/fips.sh@16 -- # killprocess 1696581 00:15:18.807 17:02:34 -- common/autotest_common.sh@936 -- # '[' -z 1696581 ']' 00:15:18.807 17:02:34 -- common/autotest_common.sh@940 -- # kill -0 1696581 00:15:18.807 17:02:34 -- common/autotest_common.sh@941 -- # uname 00:15:18.807 17:02:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.807 17:02:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1696581 00:15:19.065 
17:02:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:19.065 17:02:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:19.065 17:02:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1696581' 00:15:19.065 killing process with pid 1696581 00:15:19.065 17:02:34 -- common/autotest_common.sh@955 -- # kill 1696581 00:15:19.065 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.065 00:15:19.065 Latency(us) 00:15:19.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.065 =================================================================================================================== 00:15:19.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.065 [2024-04-18 17:02:34.528504] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:19.065 17:02:34 -- common/autotest_common.sh@960 -- # wait 1696581 00:15:19.323 17:02:34 -- fips/fips.sh@17 -- # nvmftestfini 00:15:19.324 17:02:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:19.324 17:02:34 -- nvmf/common.sh@117 -- # sync 00:15:19.324 17:02:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.324 17:02:34 -- nvmf/common.sh@120 -- # set +e 00:15:19.324 17:02:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.324 17:02:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.324 rmmod nvme_tcp 00:15:19.324 rmmod nvme_fabrics 00:15:19.324 rmmod nvme_keyring 00:15:19.324 17:02:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.324 17:02:34 -- nvmf/common.sh@124 -- # set -e 00:15:19.324 17:02:34 -- nvmf/common.sh@125 -- # return 0 00:15:19.324 17:02:34 -- nvmf/common.sh@478 -- # '[' -n 1696390 ']' 00:15:19.324 17:02:34 -- nvmf/common.sh@479 -- # killprocess 1696390 00:15:19.324 17:02:34 -- common/autotest_common.sh@936 -- # '[' -z 1696390 ']' 00:15:19.324 17:02:34 -- 
common/autotest_common.sh@940 -- # kill -0 1696390 00:15:19.324 17:02:34 -- common/autotest_common.sh@941 -- # uname 00:15:19.324 17:02:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.324 17:02:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1696390 00:15:19.324 17:02:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:19.324 17:02:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:19.324 17:02:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1696390' 00:15:19.324 killing process with pid 1696390 00:15:19.324 17:02:34 -- common/autotest_common.sh@955 -- # kill 1696390 00:15:19.324 [2024-04-18 17:02:34.877060] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:19.324 17:02:34 -- common/autotest_common.sh@960 -- # wait 1696390 00:15:19.582 17:02:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:19.582 17:02:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:19.582 17:02:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:19.582 17:02:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.582 17:02:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.582 17:02:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.582 17:02:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.582 17:02:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.491 17:02:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.491 17:02:37 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:21.491 00:15:21.491 real 0m17.249s 00:15:21.491 user 0m22.404s 00:15:21.491 sys 0m5.308s 00:15:21.491 17:02:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:21.491 17:02:37 -- common/autotest_common.sh@10 -- # set +x 00:15:21.491 
************************************ 00:15:21.491 END TEST nvmf_fips 00:15:21.491 ************************************ 00:15:21.750 17:02:37 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:15:21.750 17:02:37 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:15:21.750 17:02:37 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:15:21.750 17:02:37 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:15:21.750 17:02:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.750 17:02:37 -- common/autotest_common.sh@10 -- # set +x 00:15:23.656 17:02:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:23.656 17:02:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:23.656 17:02:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:23.656 17:02:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:23.656 17:02:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:23.656 17:02:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:23.656 17:02:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:23.656 17:02:39 -- nvmf/common.sh@295 -- # net_devs=() 00:15:23.656 17:02:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:23.656 17:02:39 -- nvmf/common.sh@296 -- # e810=() 00:15:23.656 17:02:39 -- nvmf/common.sh@296 -- # local -ga e810 00:15:23.656 17:02:39 -- nvmf/common.sh@297 -- # x722=() 00:15:23.656 17:02:39 -- nvmf/common.sh@297 -- # local -ga x722 00:15:23.656 17:02:39 -- nvmf/common.sh@298 -- # mlx=() 00:15:23.656 17:02:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:23.656 17:02:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.656 17:02:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.656 17:02:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.656 17:02:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.656 17:02:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.657 17:02:39 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.657 17:02:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.657 17:02:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.657 17:02:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.657 17:02:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.657 17:02:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.657 17:02:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:23.657 17:02:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:23.657 17:02:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:23.657 17:02:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.657 17:02:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:23.657 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:23.657 17:02:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.657 17:02:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:23.657 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:23.657 17:02:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:23.657 17:02:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.657 17:02:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.657 17:02:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:23.657 17:02:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.657 17:02:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:23.657 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:23.657 17:02:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.657 17:02:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.657 17:02:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.657 17:02:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:23.657 17:02:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.657 17:02:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:23.657 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:23.657 17:02:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.657 17:02:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:23.657 17:02:39 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.657 17:02:39 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:15:23.657 17:02:39 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:23.657 17:02:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:23.657 17:02:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.657 17:02:39 -- 
common/autotest_common.sh@10 -- # set +x 00:15:23.657 ************************************ 00:15:23.657 START TEST nvmf_perf_adq 00:15:23.657 ************************************ 00:15:23.657 17:02:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:23.657 * Looking for test storage... 00:15:23.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.657 17:02:39 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.657 17:02:39 -- nvmf/common.sh@7 -- # uname -s 00:15:23.657 17:02:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.657 17:02:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.657 17:02:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.657 17:02:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.657 17:02:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.657 17:02:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.657 17:02:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.657 17:02:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.657 17:02:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.657 17:02:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.657 17:02:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.657 17:02:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.657 17:02:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.657 17:02:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.657 17:02:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.657 17:02:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.657 17:02:39 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.657 17:02:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.657 17:02:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.657 17:02:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.657 17:02:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.657 17:02:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.657 17:02:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.657 17:02:39 -- paths/export.sh@5 -- # export PATH 00:15:23.657 17:02:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.657 17:02:39 -- nvmf/common.sh@47 -- # : 0 00:15:23.657 17:02:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.657 17:02:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.657 17:02:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.657 17:02:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.657 17:02:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.657 17:02:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.657 17:02:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.657 17:02:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.657 17:02:39 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:15:23.657 17:02:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:23.657 17:02:39 -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.194 17:02:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:26.194 17:02:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:26.194 17:02:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:26.194 17:02:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:26.194 17:02:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:26.194 17:02:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:26.194 17:02:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:26.194 17:02:41 -- nvmf/common.sh@295 -- # net_devs=() 00:15:26.194 17:02:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:26.194 17:02:41 -- nvmf/common.sh@296 -- # e810=() 00:15:26.194 17:02:41 -- nvmf/common.sh@296 -- # local -ga e810 00:15:26.194 17:02:41 -- nvmf/common.sh@297 -- # x722=() 00:15:26.194 17:02:41 -- nvmf/common.sh@297 -- # local -ga x722 00:15:26.194 17:02:41 -- nvmf/common.sh@298 -- # mlx=() 00:15:26.194 17:02:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:26.194 17:02:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.194 17:02:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:26.194 17:02:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:26.194 17:02:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:26.194 17:02:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.194 17:02:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:26.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:26.194 17:02:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.194 17:02:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:26.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:26.194 17:02:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:26.194 17:02:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:26.194 17:02:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.194 17:02:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:26.194 17:02:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:26.194 17:02:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.194 17:02:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:26.194 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:26.194 17:02:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.194 17:02:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.194 17:02:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.194 17:02:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:26.194 17:02:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.194 17:02:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:26.194 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:26.194 17:02:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.194 17:02:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:26.194 17:02:41 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:26.194 17:02:41 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:15:26.194 17:02:41 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:26.194 17:02:41 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:15:26.194 17:02:41 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:26.454 17:02:42 -- target/perf_adq.sh@53 -- # modprobe ice 00:15:27.839 17:02:43 -- target/perf_adq.sh@54 -- # sleep 5 00:15:33.156 17:02:48 -- target/perf_adq.sh@67 -- # nvmftestinit 00:15:33.156 17:02:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:33.156 17:02:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.156 17:02:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:33.156 17:02:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:33.156 17:02:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:33.156 
17:02:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.156 17:02:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.156 17:02:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.156 17:02:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:33.156 17:02:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.156 17:02:48 -- common/autotest_common.sh@10 -- # set +x 00:15:33.156 17:02:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:33.156 17:02:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.156 17:02:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.156 17:02:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.156 17:02:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.156 17:02:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.156 17:02:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.156 17:02:48 -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.156 17:02:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.156 17:02:48 -- nvmf/common.sh@296 -- # e810=() 00:15:33.156 17:02:48 -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.156 17:02:48 -- nvmf/common.sh@297 -- # x722=() 00:15:33.156 17:02:48 -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.156 17:02:48 -- nvmf/common.sh@298 -- # mlx=() 00:15:33.156 17:02:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.156 17:02:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.156 17:02:48 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.156 17:02:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.156 17:02:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.156 17:02:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.156 17:02:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.156 17:02:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:33.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:33.156 17:02:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.156 17:02:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:33.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:33.156 17:02:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.156 17:02:48 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.156 17:02:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.156 17:02:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.156 17:02:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:33.156 17:02:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.156 17:02:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:33.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:33.156 17:02:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.156 17:02:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.156 17:02:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.156 17:02:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:33.156 17:02:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.156 17:02:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:33.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:33.156 17:02:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.156 17:02:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:33.156 17:02:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:33.156 17:02:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:33.156 17:02:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:33.156 17:02:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.156 17:02:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.156 17:02:48 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.156 17:02:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.156 17:02:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.156 17:02:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.156 17:02:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.156 17:02:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.156 17:02:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.156 17:02:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.156 17:02:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.156 17:02:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.156 17:02:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.156 17:02:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.156 17:02:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.156 17:02:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.156 17:02:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.156 17:02:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.157 17:02:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.157 17:02:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:15:33.157 00:15:33.157 --- 10.0.0.2 ping statistics --- 00:15:33.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.157 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:15:33.157 17:02:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:15:33.157 00:15:33.157 --- 10.0.0.1 ping statistics --- 00:15:33.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.157 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:33.157 17:02:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.157 17:02:48 -- nvmf/common.sh@411 -- # return 0 00:15:33.157 17:02:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:33.157 17:02:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.157 17:02:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:33.157 17:02:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:33.157 17:02:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.157 17:02:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:33.157 17:02:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:33.157 17:02:48 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:33.157 17:02:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:33.157 17:02:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:33.157 17:02:48 -- common/autotest_common.sh@10 -- # set +x 00:15:33.157 17:02:48 -- nvmf/common.sh@470 -- # nvmfpid=1702332 00:15:33.157 17:02:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:33.157 17:02:48 -- nvmf/common.sh@471 -- # waitforlisten 1702332 00:15:33.157 17:02:48 -- common/autotest_common.sh@817 -- # '[' -z 1702332 ']' 00:15:33.157 17:02:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.157 17:02:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:33.157 17:02:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:33.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.157 17:02:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:33.157 17:02:48 -- common/autotest_common.sh@10 -- # set +x 00:15:33.157 [2024-04-18 17:02:48.666285] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:33.157 [2024-04-18 17:02:48.666373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.157 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.157 [2024-04-18 17:02:48.731521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.157 [2024-04-18 17:02:48.843033] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.157 [2024-04-18 17:02:48.843084] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.157 [2024-04-18 17:02:48.843112] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.157 [2024-04-18 17:02:48.843124] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.157 [2024-04-18 17:02:48.843134] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
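The `waitforlisten` step traced above blocks until the target's RPC socket (`/var/tmp/spdk.sock`) comes up. A minimal sketch of that polling pattern, assuming only that the readiness signal is the socket path appearing; the real SPDK helper additionally checks that the pid is alive and that the RPC server answers, with `max_retries=100` as seen in the trace:

```shell
#!/usr/bin/env bash
# Simplified stand-in for waitforlisten: poll until a socket/file path
# exists, failing after a bounded number of retries. Hypothetical helper
# name; the real logic lives in common/autotest_common.sh.
wait_for_sock() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        [[ -e "$sock" ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Demo: the "socket" appears shortly after we start waiting.
sock=$(mktemp -u)
( sleep 0.3; touch "$sock" ) &
wait_for_sock "$sock" && echo "listening"
rm -f "$sock"
```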
00:15:33.157 [2024-04-18 17:02:48.843266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.157 [2024-04-18 17:02:48.843331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.157 [2024-04-18 17:02:48.843404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.157 [2024-04-18 17:02:48.843409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.416 17:02:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:33.416 17:02:48 -- common/autotest_common.sh@850 -- # return 0 00:15:33.416 17:02:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:33.416 17:02:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:33.416 17:02:48 -- common/autotest_common.sh@10 -- # set +x 00:15:33.416 17:02:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.416 17:02:48 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:15:33.416 17:02:48 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:15:33.416 17:02:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.416 17:02:48 -- common/autotest_common.sh@10 -- # set +x 00:15:33.416 17:02:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.416 17:02:48 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:15:33.416 17:02:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.416 17:02:48 -- common/autotest_common.sh@10 -- # set +x 00:15:33.416 17:02:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.416 17:02:49 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:15:33.416 17:02:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.416 17:02:49 -- common/autotest_common.sh@10 -- # set +x 00:15:33.416 [2024-04-18 17:02:49.023249] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:15:33.416 17:02:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.416 17:02:49 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:33.416 17:02:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.416 17:02:49 -- common/autotest_common.sh@10 -- # set +x 00:15:33.416 Malloc1 00:15:33.416 17:02:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.416 17:02:49 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:33.416 17:02:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.416 17:02:49 -- common/autotest_common.sh@10 -- # set +x 00:15:33.416 17:02:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.416 17:02:49 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.416 17:02:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.416 17:02:49 -- common/autotest_common.sh@10 -- # set +x 00:15:33.416 17:02:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.416 17:02:49 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.416 17:02:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.416 17:02:49 -- common/autotest_common.sh@10 -- # set +x 00:15:33.416 [2024-04-18 17:02:49.074504] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.416 17:02:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.416 17:02:49 -- target/perf_adq.sh@73 -- # perfpid=1702367 00:15:33.416 17:02:49 -- target/perf_adq.sh@74 -- # sleep 2 00:15:33.416 17:02:49 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
00:15:33.416 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.949 17:02:51 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:15:35.949 17:02:51 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:15:35.949 17:02:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:35.949 17:02:51 -- target/perf_adq.sh@76 -- # wc -l 00:15:35.949 17:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:35.949 17:02:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:35.949 17:02:51 -- target/perf_adq.sh@76 -- # count=4 00:15:35.949 17:02:51 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:15:35.949 17:02:51 -- target/perf_adq.sh@81 -- # wait 1702367 00:15:44.072 Initializing NVMe Controllers 00:15:44.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:44.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:15:44.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:15:44.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:15:44.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:15:44.072 Initialization complete. Launching workers. 
00:15:44.072 ======================================================== 00:15:44.072 Latency(us) 00:15:44.072 Device Information : IOPS MiB/s Average min max 00:15:44.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10418.10 40.70 6143.33 2475.30 8981.47 00:15:44.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10499.10 41.01 6095.61 2746.95 8243.36 00:15:44.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9401.20 36.72 6808.11 2578.41 9915.71 00:15:44.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10039.60 39.22 6376.32 2894.75 9144.28 00:15:44.072 ======================================================== 00:15:44.072 Total : 40358.00 157.65 6343.73 2475.30 9915.71 00:15:44.072 00:15:44.072 17:02:59 -- target/perf_adq.sh@82 -- # nvmftestfini 00:15:44.072 17:02:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:44.072 17:02:59 -- nvmf/common.sh@117 -- # sync 00:15:44.072 17:02:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.072 17:02:59 -- nvmf/common.sh@120 -- # set +e 00:15:44.072 17:02:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.072 17:02:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.072 rmmod nvme_tcp 00:15:44.072 rmmod nvme_fabrics 00:15:44.072 rmmod nvme_keyring 00:15:44.072 17:02:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.072 17:02:59 -- nvmf/common.sh@124 -- # set -e 00:15:44.072 17:02:59 -- nvmf/common.sh@125 -- # return 0 00:15:44.072 17:02:59 -- nvmf/common.sh@478 -- # '[' -n 1702332 ']' 00:15:44.072 17:02:59 -- nvmf/common.sh@479 -- # killprocess 1702332 00:15:44.072 17:02:59 -- common/autotest_common.sh@936 -- # '[' -z 1702332 ']' 00:15:44.072 17:02:59 -- common/autotest_common.sh@940 -- # kill -0 1702332 00:15:44.072 17:02:59 -- common/autotest_common.sh@941 -- # uname 00:15:44.072 17:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.072 17:02:59 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1702332 00:15:44.072 17:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.072 17:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.072 17:02:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1702332' 00:15:44.072 killing process with pid 1702332 00:15:44.072 17:02:59 -- common/autotest_common.sh@955 -- # kill 1702332 00:15:44.072 17:02:59 -- common/autotest_common.sh@960 -- # wait 1702332 00:15:44.072 17:02:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:44.072 17:02:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:44.072 17:02:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:44.072 17:02:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.072 17:02:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.072 17:02:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.072 17:02:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.072 17:02:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.974 17:03:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.974 17:03:01 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:15:45.974 17:03:01 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:46.912 17:03:02 -- target/perf_adq.sh@53 -- # modprobe ice 00:15:48.291 17:03:03 -- target/perf_adq.sh@54 -- # sleep 5 00:15:53.567 17:03:08 -- target/perf_adq.sh@87 -- # nvmftestinit 00:15:53.567 17:03:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:53.567 17:03:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.567 17:03:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:53.567 17:03:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:53.567 17:03:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:53.567 17:03:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.567 
17:03:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.567 17:03:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.567 17:03:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:53.567 17:03:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.567 17:03:08 -- common/autotest_common.sh@10 -- # set +x 00:15:53.567 17:03:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:53.567 17:03:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:53.567 17:03:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:53.567 17:03:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:53.567 17:03:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:53.567 17:03:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:53.567 17:03:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:53.567 17:03:08 -- nvmf/common.sh@295 -- # net_devs=() 00:15:53.567 17:03:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:53.567 17:03:08 -- nvmf/common.sh@296 -- # e810=() 00:15:53.567 17:03:08 -- nvmf/common.sh@296 -- # local -ga e810 00:15:53.567 17:03:08 -- nvmf/common.sh@297 -- # x722=() 00:15:53.567 17:03:08 -- nvmf/common.sh@297 -- # local -ga x722 00:15:53.567 17:03:08 -- nvmf/common.sh@298 -- # mlx=() 00:15:53.567 17:03:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:53.567 17:03:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.567 17:03:08 -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.567 17:03:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:53.567 17:03:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:53.567 17:03:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:53.567 17:03:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.567 17:03:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:53.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:53.567 17:03:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.567 17:03:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:53.567 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:53.567 17:03:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:53.567 17:03:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:53.567 17:03:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.567 17:03:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.567 17:03:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:53.567 17:03:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.567 17:03:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:53.567 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:53.567 17:03:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.567 17:03:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.567 17:03:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.567 17:03:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:53.567 17:03:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.567 17:03:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:53.567 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:53.567 17:03:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.567 17:03:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:53.568 17:03:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:53.568 17:03:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:53.568 17:03:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:53.568 17:03:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:53.568 17:03:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.568 17:03:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.568 17:03:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.568 17:03:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:53.568 17:03:08 -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:53.568 17:03:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:53.568 17:03:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:53.568 17:03:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:53.568 17:03:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.568 17:03:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:53.568 17:03:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:53.568 17:03:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:53.568 17:03:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:53.568 17:03:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:53.568 17:03:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:53.568 17:03:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:53.568 17:03:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:53.568 17:03:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:53.568 17:03:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:53.568 17:03:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:53.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:15:53.568 00:15:53.568 --- 10.0.0.2 ping statistics --- 00:15:53.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.568 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:53.568 17:03:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:53.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:53.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:15:53.568 00:15:53.568 --- 10.0.0.1 ping statistics --- 00:15:53.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.568 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:53.568 17:03:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.568 17:03:08 -- nvmf/common.sh@411 -- # return 0 00:15:53.568 17:03:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:53.568 17:03:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.568 17:03:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:53.568 17:03:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:53.568 17:03:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.568 17:03:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:53.568 17:03:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:53.568 17:03:08 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:15:53.568 17:03:08 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:15:53.568 17:03:08 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:15:53.568 17:03:08 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:15:53.568 net.core.busy_poll = 1 00:15:53.568 17:03:08 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:15:53.568 net.core.busy_read = 1 00:15:53.568 17:03:08 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:15:53.568 17:03:08 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:15:53.568 17:03:09 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:15:53.568 17:03:09 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:15:53.568 17:03:09 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:15:53.568 17:03:09 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:53.568 17:03:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:53.568 17:03:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:53.568 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.568 17:03:09 -- nvmf/common.sh@470 -- # nvmfpid=1705579 00:15:53.568 17:03:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:53.568 17:03:09 -- nvmf/common.sh@471 -- # waitforlisten 1705579 00:15:53.568 17:03:09 -- common/autotest_common.sh@817 -- # '[' -z 1705579 ']' 00:15:53.568 17:03:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.568 17:03:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:53.568 17:03:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.568 17:03:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:53.568 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.568 [2024-04-18 17:03:09.102442] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:53.568 [2024-04-18 17:03:09.102531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.568 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.568 [2024-04-18 17:03:09.166694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.826 [2024-04-18 17:03:09.273781] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.826 [2024-04-18 17:03:09.273834] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.826 [2024-04-18 17:03:09.273848] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.826 [2024-04-18 17:03:09.273860] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.826 [2024-04-18 17:03:09.273870] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:53.826 [2024-04-18 17:03:09.273923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.826 [2024-04-18 17:03:09.273973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.826 [2024-04-18 17:03:09.274022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.826 [2024-04-18 17:03:09.274025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.826 17:03:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:53.826 17:03:09 -- common/autotest_common.sh@850 -- # return 0 00:15:53.826 17:03:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:53.826 17:03:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:53.826 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 17:03:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.826 17:03:09 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:15:53.826 17:03:09 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:15:53.826 17:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.826 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 17:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.826 17:03:09 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:15:53.826 17:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.826 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 17:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.826 17:03:09 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:15:53.826 17:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.826 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 [2024-04-18 17:03:09.443984] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:15:53.826 17:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.826 17:03:09 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:53.826 17:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.826 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 Malloc1 00:15:53.826 17:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.826 17:03:09 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:53.826 17:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.826 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 17:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.826 17:03:09 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:53.826 17:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.826 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 17:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.826 17:03:09 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.826 17:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.826 17:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.826 [2024-04-18 17:03:09.495042] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.826 17:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.826 17:03:09 -- target/perf_adq.sh@94 -- # perfpid=1705624 00:15:53.826 17:03:09 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:53.826 17:03:09 -- target/perf_adq.sh@95 -- # sleep 2 
00:15:53.826 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.359 17:03:11 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:15:56.359 17:03:11 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:15:56.359 17:03:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.359 17:03:11 -- target/perf_adq.sh@97 -- # wc -l 00:15:56.359 17:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:56.359 17:03:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.359 17:03:11 -- target/perf_adq.sh@97 -- # count=2 00:15:56.359 17:03:11 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:15:56.359 17:03:11 -- target/perf_adq.sh@103 -- # wait 1705624 00:16:04.531 Initializing NVMe Controllers 00:16:04.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:04.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:04.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:04.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:04.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:04.531 Initialization complete. Launching workers. 
00:16:04.531 ======================================================== 00:16:04.531 Latency(us) 00:16:04.531 Device Information : IOPS MiB/s Average min max 00:16:04.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4416.10 17.25 14491.96 2103.56 61082.59 00:16:04.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4867.30 19.01 13148.26 2170.99 60723.27 00:16:04.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4432.30 17.31 14446.28 1646.47 62654.06 00:16:04.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13355.70 52.17 4791.82 1511.97 6914.12 00:16:04.531 ======================================================== 00:16:04.531 Total : 27071.39 105.75 9457.32 1511.97 62654.06 00:16:04.531 00:16:04.531 17:03:19 -- target/perf_adq.sh@104 -- # nvmftestfini 00:16:04.531 17:03:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:04.531 17:03:19 -- nvmf/common.sh@117 -- # sync 00:16:04.531 17:03:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:04.531 17:03:19 -- nvmf/common.sh@120 -- # set +e 00:16:04.531 17:03:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:04.531 17:03:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:04.531 rmmod nvme_tcp 00:16:04.531 rmmod nvme_fabrics 00:16:04.531 rmmod nvme_keyring 00:16:04.531 17:03:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:04.531 17:03:19 -- nvmf/common.sh@124 -- # set -e 00:16:04.531 17:03:19 -- nvmf/common.sh@125 -- # return 0 00:16:04.531 17:03:19 -- nvmf/common.sh@478 -- # '[' -n 1705579 ']' 00:16:04.531 17:03:19 -- nvmf/common.sh@479 -- # killprocess 1705579 00:16:04.531 17:03:19 -- common/autotest_common.sh@936 -- # '[' -z 1705579 ']' 00:16:04.531 17:03:19 -- common/autotest_common.sh@940 -- # kill -0 1705579 00:16:04.531 17:03:19 -- common/autotest_common.sh@941 -- # uname 00:16:04.531 17:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:04.531 17:03:19 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1705579 00:16:04.531 17:03:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:04.531 17:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:04.531 17:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1705579' 00:16:04.531 killing process with pid 1705579 00:16:04.531 17:03:19 -- common/autotest_common.sh@955 -- # kill 1705579 00:16:04.531 17:03:19 -- common/autotest_common.sh@960 -- # wait 1705579 00:16:04.531 17:03:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:04.531 17:03:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:04.531 17:03:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:04.531 17:03:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:04.531 17:03:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:04.531 17:03:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.531 17:03:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.531 17:03:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.439 17:03:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:06.439 17:03:22 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:16:06.439 00:16:06.439 real 0m42.834s 00:16:06.439 user 2m34.842s 00:16:06.439 sys 0m11.136s 00:16:06.439 17:03:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:06.439 17:03:22 -- common/autotest_common.sh@10 -- # set +x 00:16:06.439 ************************************ 00:16:06.439 END TEST nvmf_perf_adq 00:16:06.439 ************************************ 00:16:06.439 17:03:22 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:06.439 17:03:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:06.439 17:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:16:06.439 17:03:22 -- common/autotest_common.sh@10 -- # set +x 00:16:06.697 ************************************ 00:16:06.697 START TEST nvmf_shutdown 00:16:06.697 ************************************ 00:16:06.697 17:03:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:06.697 * Looking for test storage... 00:16:06.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.697 17:03:22 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.697 17:03:22 -- nvmf/common.sh@7 -- # uname -s 00:16:06.697 17:03:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.698 17:03:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.698 17:03:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.698 17:03:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.698 17:03:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.698 17:03:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.698 17:03:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.698 17:03:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.698 17:03:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.698 17:03:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.698 17:03:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.698 17:03:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.698 17:03:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.698 17:03:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.698 17:03:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.698 17:03:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.698 17:03:22 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.698 17:03:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.698 17:03:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.698 17:03:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.698 17:03:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.698 17:03:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.698 17:03:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.698 17:03:22 -- paths/export.sh@5 -- # export PATH 00:16:06.698 17:03:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.698 17:03:22 -- nvmf/common.sh@47 -- # : 0 00:16:06.698 17:03:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.698 17:03:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.698 17:03:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.698 17:03:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.698 17:03:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.698 17:03:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.698 17:03:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.698 17:03:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.698 17:03:22 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.698 17:03:22 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.698 17:03:22 -- target/shutdown.sh@147 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:06.698 17:03:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:06.698 17:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.698 17:03:22 -- common/autotest_common.sh@10 -- # set +x 00:16:06.698 ************************************ 00:16:06.698 START TEST nvmf_shutdown_tc1 00:16:06.698 ************************************ 00:16:06.698 17:03:22 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:16:06.698 17:03:22 -- target/shutdown.sh@74 -- # starttarget 00:16:06.698 17:03:22 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:06.698 17:03:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:06.698 17:03:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.698 17:03:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:06.698 17:03:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:06.698 17:03:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:06.698 17:03:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.698 17:03:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.698 17:03:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.698 17:03:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:06.698 17:03:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:06.698 17:03:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:06.698 17:03:22 -- common/autotest_common.sh@10 -- # set +x 00:16:08.606 17:03:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:08.606 17:03:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.606 17:03:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.606 17:03:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.606 17:03:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.606 17:03:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.606 17:03:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:16:08.606 17:03:24 -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.606 17:03:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.606 17:03:24 -- nvmf/common.sh@296 -- # e810=() 00:16:08.606 17:03:24 -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.606 17:03:24 -- nvmf/common.sh@297 -- # x722=() 00:16:08.606 17:03:24 -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.606 17:03:24 -- nvmf/common.sh@298 -- # mlx=() 00:16:08.606 17:03:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.606 17:03:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.606 17:03:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.606 17:03:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:08.606 17:03:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:08.606 17:03:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:08.606 17:03:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:08.607 17:03:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.607 17:03:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:16:08.607 17:03:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:08.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:08.607 17:03:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.607 17:03:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:08.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:08.607 17:03:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.607 17:03:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.607 17:03:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.607 17:03:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:08.607 17:03:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.607 17:03:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:08.607 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:08.607 17:03:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.607 17:03:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.607 17:03:24 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.607 17:03:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:08.607 17:03:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.607 17:03:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:08.607 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:08.607 17:03:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.607 17:03:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:08.607 17:03:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:08.607 17:03:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:08.607 17:03:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:08.607 17:03:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.607 17:03:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.607 17:03:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.607 17:03:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:08.607 17:03:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.607 17:03:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.607 17:03:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:08.607 17:03:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.607 17:03:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.607 17:03:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:08.607 17:03:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:08.607 17:03:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.607 17:03:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.607 17:03:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.607 17:03:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:16:08.607 17:03:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:08.607 17:03:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.866 17:03:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.866 17:03:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.866 17:03:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:08.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:16:08.866 00:16:08.866 --- 10.0.0.2 ping statistics --- 00:16:08.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.866 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:16:08.866 17:03:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:16:08.866 00:16:08.866 --- 10.0.0.1 ping statistics --- 00:16:08.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.866 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:16:08.866 17:03:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.866 17:03:24 -- nvmf/common.sh@411 -- # return 0 00:16:08.866 17:03:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:08.866 17:03:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.866 17:03:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:08.866 17:03:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:08.866 17:03:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.866 17:03:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:08.866 17:03:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:08.866 17:03:24 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:08.866 17:03:24 -- nvmf/common.sh@468 -- # 
timing_enter start_nvmf_tgt 00:16:08.866 17:03:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:08.866 17:03:24 -- common/autotest_common.sh@10 -- # set +x 00:16:08.866 17:03:24 -- nvmf/common.sh@470 -- # nvmfpid=1708795 00:16:08.866 17:03:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:08.866 17:03:24 -- nvmf/common.sh@471 -- # waitforlisten 1708795 00:16:08.866 17:03:24 -- common/autotest_common.sh@817 -- # '[' -z 1708795 ']' 00:16:08.866 17:03:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.866 17:03:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:08.866 17:03:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.866 17:03:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:08.866 17:03:24 -- common/autotest_common.sh@10 -- # set +x 00:16:08.866 [2024-04-18 17:03:24.427453] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:08.866 [2024-04-18 17:03:24.427530] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.866 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.866 [2024-04-18 17:03:24.500145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.125 [2024-04-18 17:03:24.617810] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.125 [2024-04-18 17:03:24.617886] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:09.125 [2024-04-18 17:03:24.617902] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.125 [2024-04-18 17:03:24.617916] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.125 [2024-04-18 17:03:24.617936] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.125 [2024-04-18 17:03:24.618025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.125 [2024-04-18 17:03:24.618143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.126 [2024-04-18 17:03:24.618212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.126 [2024-04-18 17:03:24.618209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:09.695 17:03:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:09.695 17:03:25 -- common/autotest_common.sh@850 -- # return 0 00:16:09.695 17:03:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:09.695 17:03:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:09.695 17:03:25 -- common/autotest_common.sh@10 -- # set +x 00:16:09.695 17:03:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.695 17:03:25 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.695 17:03:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.695 17:03:25 -- common/autotest_common.sh@10 -- # set +x 00:16:09.953 [2024-04-18 17:03:25.406327] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.953 17:03:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.953 17:03:25 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:09.953 17:03:25 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:09.953 17:03:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:09.953 17:03:25 -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.953 17:03:25 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:09.953 17:03:25 -- target/shutdown.sh@28 -- # cat 00:16:09.953 17:03:25 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:09.953 17:03:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.953 17:03:25 -- common/autotest_common.sh@10 -- # set +x 00:16:09.953 Malloc1 00:16:09.953 [2024-04-18 17:03:25.481151] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.953 Malloc2 00:16:09.953 Malloc3 00:16:09.953 Malloc4 
00:16:09.953 Malloc5 00:16:10.211 Malloc6 00:16:10.211 Malloc7 00:16:10.211 Malloc8 00:16:10.211 Malloc9 00:16:10.211 Malloc10 00:16:10.470 17:03:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:10.470 17:03:25 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:10.470 17:03:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:10.470 17:03:25 -- common/autotest_common.sh@10 -- # set +x 00:16:10.470 17:03:25 -- target/shutdown.sh@78 -- # perfpid=1709104 00:16:10.470 17:03:25 -- target/shutdown.sh@79 -- # waitforlisten 1709104 /var/tmp/bdevperf.sock 00:16:10.470 17:03:25 -- common/autotest_common.sh@817 -- # '[' -z 1709104 ']' 00:16:10.470 17:03:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.470 17:03:25 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:10.470 17:03:25 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:10.470 17:03:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:10.470 17:03:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:10.470 17:03:25 -- nvmf/common.sh@521 -- # config=() 00:16:10.470 17:03:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:10.470 17:03:25 -- nvmf/common.sh@521 -- # local subsystem config 00:16:10.470 17:03:25 -- common/autotest_common.sh@10 -- # set +x 00:16:10.470 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.470 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.470 { 00:16:10.470 "params": { 00:16:10.470 "name": "Nvme$subsystem", 00:16:10.470 "trtype": "$TEST_TRANSPORT", 00:16:10.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.470 "adrfam": "ipv4", 00:16:10.470 "trsvcid": "$NVMF_PORT", 00:16:10.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.470 "hdgst": ${hdgst:-false}, 00:16:10.470 "ddgst": ${ddgst:-false} 00:16:10.470 }, 00:16:10.470 "method": "bdev_nvme_attach_controller" 00:16:10.470 } 00:16:10.470 EOF 00:16:10.470 )") 00:16:10.470 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.470 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.470 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.470 { 00:16:10.470 "params": { 00:16:10.470 "name": "Nvme$subsystem", 00:16:10.470 "trtype": "$TEST_TRANSPORT", 00:16:10.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.470 "adrfam": "ipv4", 00:16:10.470 "trsvcid": "$NVMF_PORT", 00:16:10.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.470 "hdgst": ${hdgst:-false}, 00:16:10.470 "ddgst": ${ddgst:-false} 00:16:10.470 }, 00:16:10.470 "method": "bdev_nvme_attach_controller" 00:16:10.470 } 00:16:10.470 EOF 00:16:10.470 )") 00:16:10.470 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.470 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.470 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.470 { 00:16:10.470 "params": { 00:16:10.470 "name": 
"Nvme$subsystem", 00:16:10.470 "trtype": "$TEST_TRANSPORT", 00:16:10.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.470 "adrfam": "ipv4", 00:16:10.470 "trsvcid": "$NVMF_PORT", 00:16:10.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.470 "hdgst": ${hdgst:-false}, 00:16:10.470 "ddgst": ${ddgst:-false} 00:16:10.470 }, 00:16:10.470 "method": "bdev_nvme_attach_controller" 00:16:10.470 } 00:16:10.470 EOF 00:16:10.470 )") 00:16:10.470 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.470 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.470 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.471 { 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme$subsystem", 00:16:10.471 "trtype": "$TEST_TRANSPORT", 00:16:10.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "$NVMF_PORT", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.471 "hdgst": ${hdgst:-false}, 00:16:10.471 "ddgst": ${ddgst:-false} 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 } 00:16:10.471 EOF 00:16:10.471 )") 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.471 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.471 { 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme$subsystem", 00:16:10.471 "trtype": "$TEST_TRANSPORT", 00:16:10.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "$NVMF_PORT", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.471 "hdgst": ${hdgst:-false}, 00:16:10.471 "ddgst": ${ddgst:-false} 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 } 00:16:10.471 EOF 
00:16:10.471 )") 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.471 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.471 { 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme$subsystem", 00:16:10.471 "trtype": "$TEST_TRANSPORT", 00:16:10.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "$NVMF_PORT", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.471 "hdgst": ${hdgst:-false}, 00:16:10.471 "ddgst": ${ddgst:-false} 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 } 00:16:10.471 EOF 00:16:10.471 )") 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.471 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.471 { 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme$subsystem", 00:16:10.471 "trtype": "$TEST_TRANSPORT", 00:16:10.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "$NVMF_PORT", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.471 "hdgst": ${hdgst:-false}, 00:16:10.471 "ddgst": ${ddgst:-false} 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 } 00:16:10.471 EOF 00:16:10.471 )") 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.471 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.471 { 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme$subsystem", 00:16:10.471 "trtype": "$TEST_TRANSPORT", 00:16:10.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "$NVMF_PORT", 00:16:10.471 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.471 "hdgst": ${hdgst:-false}, 00:16:10.471 "ddgst": ${ddgst:-false} 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 } 00:16:10.471 EOF 00:16:10.471 )") 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.471 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.471 { 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme$subsystem", 00:16:10.471 "trtype": "$TEST_TRANSPORT", 00:16:10.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "$NVMF_PORT", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.471 "hdgst": ${hdgst:-false}, 00:16:10.471 "ddgst": ${ddgst:-false} 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 } 00:16:10.471 EOF 00:16:10.471 )") 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.471 17:03:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:10.471 { 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme$subsystem", 00:16:10.471 "trtype": "$TEST_TRANSPORT", 00:16:10.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "$NVMF_PORT", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.471 "hdgst": ${hdgst:-false}, 00:16:10.471 "ddgst": ${ddgst:-false} 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 } 00:16:10.471 EOF 00:16:10.471 )") 00:16:10.471 17:03:25 -- nvmf/common.sh@543 -- # cat 00:16:10.471 17:03:25 -- nvmf/common.sh@545 -- # jq . 
00:16:10.471 17:03:25 -- nvmf/common.sh@546 -- # IFS=, 00:16:10.471 17:03:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme1", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme2", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme3", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme4", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme5", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 
00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme6", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme7", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme8", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.471 "name": "Nvme9", 00:16:10.471 "trtype": "tcp", 00:16:10.471 "traddr": "10.0.0.2", 00:16:10.471 "adrfam": "ipv4", 00:16:10.471 "trsvcid": "4420", 00:16:10.471 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:10.471 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:10.471 "hdgst": false, 00:16:10.471 "ddgst": false 00:16:10.471 }, 00:16:10.471 "method": "bdev_nvme_attach_controller" 
00:16:10.471 },{ 00:16:10.471 "params": { 00:16:10.472 "name": "Nvme10", 00:16:10.472 "trtype": "tcp", 00:16:10.472 "traddr": "10.0.0.2", 00:16:10.472 "adrfam": "ipv4", 00:16:10.472 "trsvcid": "4420", 00:16:10.472 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:10.472 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:10.472 "hdgst": false, 00:16:10.472 "ddgst": false 00:16:10.472 }, 00:16:10.472 "method": "bdev_nvme_attach_controller" 00:16:10.472 }' 00:16:10.472 [2024-04-18 17:03:25.986421] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:10.472 [2024-04-18 17:03:25.986501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:10.472 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.472 [2024-04-18 17:03:26.049686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.472 [2024-04-18 17:03:26.157390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.844 17:03:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:11.844 17:03:27 -- common/autotest_common.sh@850 -- # return 0 00:16:11.844 17:03:27 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:11.844 17:03:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.844 17:03:27 -- common/autotest_common.sh@10 -- # set +x 00:16:11.844 17:03:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.844 17:03:27 -- target/shutdown.sh@83 -- # kill -9 1709104 00:16:11.844 17:03:27 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:11.844 17:03:27 -- target/shutdown.sh@87 -- # sleep 1 00:16:13.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1709104 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:16:13.218 17:03:28 -- target/shutdown.sh@88 -- # kill -0 1708795 00:16:13.218 17:03:28 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:13.218 17:03:28 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:13.218 17:03:28 -- nvmf/common.sh@521 -- # config=() 00:16:13.218 17:03:28 -- nvmf/common.sh@521 -- # local subsystem config 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 
-- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 
00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.218 { 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme$subsystem", 00:16:13.218 "trtype": "$TEST_TRANSPORT", 00:16:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "$NVMF_PORT", 00:16:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.218 "hdgst": ${hdgst:-false}, 00:16:13.218 "ddgst": ${ddgst:-false} 00:16:13.218 }, 00:16:13.218 "method": "bdev_nvme_attach_controller" 00:16:13.218 } 
00:16:13.218 EOF 00:16:13.218 )") 00:16:13.218 17:03:28 -- nvmf/common.sh@543 -- # cat 00:16:13.218 17:03:28 -- nvmf/common.sh@545 -- # jq . 00:16:13.218 17:03:28 -- nvmf/common.sh@546 -- # IFS=, 00:16:13.218 17:03:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:13.218 "params": { 00:16:13.218 "name": "Nvme1", 00:16:13.218 "trtype": "tcp", 00:16:13.218 "traddr": "10.0.0.2", 00:16:13.218 "adrfam": "ipv4", 00:16:13.218 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 00:16:13.219 "params": { 00:16:13.219 "name": "Nvme2", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 00:16:13.219 "params": { 00:16:13.219 "name": "Nvme3", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 00:16:13.219 "params": { 00:16:13.219 "name": "Nvme4", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 
00:16:13.219 "params": { 00:16:13.219 "name": "Nvme5", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 00:16:13.219 "params": { 00:16:13.219 "name": "Nvme6", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 00:16:13.219 "params": { 00:16:13.219 "name": "Nvme7", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 00:16:13.219 "params": { 00:16:13.219 "name": "Nvme8", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 00:16:13.219 "params": { 00:16:13.219 "name": "Nvme9", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:13.219 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 },{ 00:16:13.219 "params": { 00:16:13.219 "name": "Nvme10", 00:16:13.219 "trtype": "tcp", 00:16:13.219 "traddr": "10.0.0.2", 00:16:13.219 "adrfam": "ipv4", 00:16:13.219 "trsvcid": "4420", 00:16:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:13.219 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:13.219 "hdgst": false, 00:16:13.219 "ddgst": false 00:16:13.219 }, 00:16:13.219 "method": "bdev_nvme_attach_controller" 00:16:13.219 }' 00:16:13.219 [2024-04-18 17:03:28.531137] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:13.219 [2024-04-18 17:03:28.531215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709399 ] 00:16:13.219 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.219 [2024-04-18 17:03:28.594947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.219 [2024-04-18 17:03:28.702231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.593 Running I/O for 1 seconds... 
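The repeated heredoc blocks above are bash's xtrace of `gen_nvmf_target_json` unrolling its per-subsystem loop: one JSON fragment per subsystem is appended to a `config` array, then the fragments are comma-joined and printed. A minimal sketch of that pattern (the two-subsystem count and literal `hdgst`/`ddgst` values here are illustrative assumptions, not the script's actual defaults):

```shell
# Hedged sketch of the gen_nvmf_target_json pattern from the xtrace above:
# one heredoc JSON fragment per subsystem, comma-joined via IFS + "${arr[*]}".
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}
EOF
)")
done
IFS=,
json="${config[*]}"   # comma-joined, as the log's printf '%s\n' "${config[*]}" emits
unset IFS
echo "$json"
```

In the log this generated JSON is handed to bdevperf through process substitution (`--json /dev/fd/63`), so no temporary config file is written.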
00:16:15.528 
00:16:15.528 Latency(us)
00:16:15.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:15.528 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:15.528 Verification LBA range: start 0x0 length 0x400
00:16:15.528 Nvme1n1 : 1.12 231.49 14.47 0.00 0.00 271874.32 6893.42 246997.90
00:16:15.528 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:15.528 Verification LBA range: start 0x0 length 0x400
00:16:15.528 Nvme2n1 : 1.12 228.75 14.30 0.00 0.00 272502.52 21651.15 248551.35
00:16:15.528 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:15.528 Verification LBA range: start 0x0 length 0x400
00:16:15.528 Nvme3n1 : 1.09 259.74 16.23 0.00 0.00 231710.29 17476.27 254765.13
00:16:15.528 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:15.528 Verification LBA range: start 0x0 length 0x400
00:16:15.528 Nvme4n1 : 1.07 243.42 15.21 0.00 0.00 245726.29 6092.42 253211.69
00:16:15.528 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:15.528 Verification LBA range: start 0x0 length 0x400
00:16:15.528 Nvme5n1 : 1.09 235.66 14.73 0.00 0.00 250645.62 19515.16 253211.69
00:16:15.528 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:15.528 Verification LBA range: start 0x0 length 0x400
00:16:15.528 Nvme6n1 : 1.18 217.31 13.58 0.00 0.00 269115.54 20680.25 268746.15
00:16:15.528 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:15.528 Verification LBA range: start 0x0 length 0x400
00:16:15.528 Nvme7n1 : 1.13 226.90 14.18 0.00 0.00 252182.57 23301.69 246997.90
00:16:15.528 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:15.528 Verification LBA range: start 0x0 length 0x400
00:16:15.528 Nvme8n1 : 1.19 269.58 16.85 0.00 0.00 209843.24 15146.10 243891.01
00:16:15.528 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:16:15.528 Verification LBA range: start 0x0 length 0x400 00:16:15.528 Nvme9n1 : 1.19 268.05 16.75 0.00 0.00 207668.57 13786.83 251658.24 00:16:15.528 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:15.528 Verification LBA range: start 0x0 length 0x400 00:16:15.528 Nvme10n1 : 1.20 261.94 16.37 0.00 0.00 208443.62 6456.51 278066.82 00:16:15.528 =================================================================================================================== 00:16:15.528 Total : 2442.84 152.68 0.00 0.00 239682.95 6092.42 278066.82 00:16:15.786 17:03:31 -- target/shutdown.sh@94 -- # stoptarget 00:16:15.786 17:03:31 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:15.786 17:03:31 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:15.786 17:03:31 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:15.786 17:03:31 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:15.786 17:03:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:15.786 17:03:31 -- nvmf/common.sh@117 -- # sync 00:16:15.786 17:03:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:15.786 17:03:31 -- nvmf/common.sh@120 -- # set +e 00:16:15.786 17:03:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:15.786 17:03:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:15.786 rmmod nvme_tcp 00:16:15.786 rmmod nvme_fabrics 00:16:15.786 rmmod nvme_keyring 00:16:15.786 17:03:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:15.786 17:03:31 -- nvmf/common.sh@124 -- # set -e 00:16:15.786 17:03:31 -- nvmf/common.sh@125 -- # return 0 00:16:15.786 17:03:31 -- nvmf/common.sh@478 -- # '[' -n 1708795 ']' 00:16:15.786 17:03:31 -- nvmf/common.sh@479 -- # killprocess 1708795 00:16:15.786 17:03:31 -- common/autotest_common.sh@936 -- # '[' -z 1708795 ']' 00:16:15.786 17:03:31 -- 
common/autotest_common.sh@940 -- # kill -0 1708795 00:16:15.786 17:03:31 -- common/autotest_common.sh@941 -- # uname 00:16:15.786 17:03:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:15.786 17:03:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1708795 00:16:16.044 17:03:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:16.044 17:03:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:16.044 17:03:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1708795' 00:16:16.044 killing process with pid 1708795 00:16:16.044 17:03:31 -- common/autotest_common.sh@955 -- # kill 1708795 00:16:16.044 17:03:31 -- common/autotest_common.sh@960 -- # wait 1708795 00:16:16.611 17:03:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:16.611 17:03:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:16.611 17:03:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:16.611 17:03:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.611 17:03:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.611 17:03:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.611 17:03:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.611 17:03:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.516 17:03:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:18.516 00:16:18.516 real 0m11.735s 00:16:18.516 user 0m33.584s 00:16:18.516 sys 0m3.160s 00:16:18.516 17:03:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:18.516 17:03:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.516 ************************************ 00:16:18.516 END TEST nvmf_shutdown_tc1 00:16:18.516 ************************************ 00:16:18.516 17:03:34 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:18.516 17:03:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
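The `killprocess` helper traced above guards its kill: it checks the PID is still alive with `kill -0`, looks up the process name with `ps --no-headers -o comm=`, and refuses to kill `sudo` itself before sending the signal and reaping the child. A rough sketch of that safeguard (the function name, return codes, and echo wording here are assumptions, not the helper's exact source):

```shell
# Hedged sketch of the killprocess pattern from the xtrace above: liveness
# check, name check, then kill and reap. Structure inferred from the log.
killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1            # not running anymore
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 in the log
    [ "$process_name" = sudo ] && return 1            # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                           # reap it if it is our child
    return 0
}
```

The final `wait` mirrors the log's `kill 1708795` followed by `wait 1708795`, which keeps the test from leaving a zombie behind before cleanup continues.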
00:16:18.516 17:03:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.516 17:03:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.516 ************************************ 00:16:18.516 START TEST nvmf_shutdown_tc2 00:16:18.516 ************************************ 00:16:18.516 17:03:34 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:16:18.516 17:03:34 -- target/shutdown.sh@99 -- # starttarget 00:16:18.516 17:03:34 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:18.516 17:03:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:18.516 17:03:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.516 17:03:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:18.516 17:03:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:18.516 17:03:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:18.516 17:03:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.516 17:03:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.516 17:03:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.516 17:03:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:18.516 17:03:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:18.516 17:03:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.516 17:03:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:18.516 17:03:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.516 17:03:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.516 17:03:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.516 17:03:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.516 17:03:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.516 17:03:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.516 17:03:34 -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.516 17:03:34 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:16:18.516 17:03:34 -- nvmf/common.sh@296 -- # e810=() 00:16:18.516 17:03:34 -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.516 17:03:34 -- nvmf/common.sh@297 -- # x722=() 00:16:18.516 17:03:34 -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.516 17:03:34 -- nvmf/common.sh@298 -- # mlx=() 00:16:18.516 17:03:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.516 17:03:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.516 17:03:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.516 17:03:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.516 17:03:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.516 17:03:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.516 17:03:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:18.516 Found 0000:0a:00.0 (0x8086 
- 0x159b) 00:16:18.516 17:03:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.516 17:03:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:18.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:18.516 17:03:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.516 17:03:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.516 17:03:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.516 17:03:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:18.516 17:03:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.516 17:03:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:18.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:18.516 17:03:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.516 17:03:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.516 17:03:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.516 17:03:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:18.516 17:03:34 -- 
nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.516 17:03:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:18.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:18.516 17:03:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.516 17:03:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:18.516 17:03:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:18.516 17:03:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:18.516 17:03:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:18.516 17:03:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.516 17:03:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.516 17:03:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.516 17:03:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.516 17:03:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.516 17:03:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.516 17:03:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.516 17:03:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.516 17:03:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.516 17:03:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.516 17:03:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.516 17:03:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.516 17:03:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.774 17:03:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.774 17:03:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.774 17:03:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.774 17:03:34 -- nvmf/common.sh@260 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.774 17:03:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.774 17:03:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.774 17:03:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:16:18.774 00:16:18.774 --- 10.0.0.2 ping statistics --- 00:16:18.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.774 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:16:18.774 17:03:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:16:18.774 00:16:18.774 --- 10.0.0.1 ping statistics --- 00:16:18.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.774 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:18.774 17:03:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.774 17:03:34 -- nvmf/common.sh@411 -- # return 0 00:16:18.774 17:03:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:18.774 17:03:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.774 17:03:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:18.774 17:03:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:18.774 17:03:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.774 17:03:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:18.774 17:03:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:18.774 17:03:34 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:18.774 17:03:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:18.774 17:03:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:18.774 17:03:34 -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.774 17:03:34 -- nvmf/common.sh@470 -- # nvmfpid=1710173 00:16:18.774 17:03:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:18.774 17:03:34 -- nvmf/common.sh@471 -- # waitforlisten 1710173 00:16:18.774 17:03:34 -- common/autotest_common.sh@817 -- # '[' -z 1710173 ']' 00:16:18.774 17:03:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.774 17:03:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:18.774 17:03:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.774 17:03:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:18.774 17:03:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.774 [2024-04-18 17:03:34.406565] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:18.774 [2024-04-18 17:03:34.406637] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.774 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.774 [2024-04-18 17:03:34.475111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.032 [2024-04-18 17:03:34.591624] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.032 [2024-04-18 17:03:34.591676] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:19.032 [2024-04-18 17:03:34.591692] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.032 [2024-04-18 17:03:34.591713] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.032 [2024-04-18 17:03:34.591725] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.032 [2024-04-18 17:03:34.591822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.032 [2024-04-18 17:03:34.591937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.032 [2024-04-18 17:03:34.592008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.032 [2024-04-18 17:03:34.592004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:19.965 17:03:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:19.965 17:03:35 -- common/autotest_common.sh@850 -- # return 0 00:16:19.965 17:03:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:19.965 17:03:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:19.965 17:03:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.965 17:03:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.965 17:03:35 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:19.965 17:03:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.965 17:03:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.965 [2024-04-18 17:03:35.356981] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.965 17:03:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.965 17:03:35 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:19.965 17:03:35 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:19.965 17:03:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:19.965 17:03:35 -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.965 17:03:35 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:19.965 17:03:35 -- target/shutdown.sh@28 -- # cat 00:16:19.965 17:03:35 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:19.965 17:03:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.965 17:03:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.965 Malloc1 00:16:19.965 [2024-04-18 17:03:35.442185] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.965 Malloc2 00:16:19.965 Malloc3 00:16:19.965 Malloc4 
00:16:19.965 Malloc5 00:16:19.965 Malloc6 00:16:20.223 Malloc7 00:16:20.223 Malloc8 00:16:20.223 Malloc9 00:16:20.223 Malloc10 00:16:20.223 17:03:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.223 17:03:35 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:20.223 17:03:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:20.223 17:03:35 -- common/autotest_common.sh@10 -- # set +x 00:16:20.223 17:03:35 -- target/shutdown.sh@103 -- # perfpid=1710479 00:16:20.223 17:03:35 -- target/shutdown.sh@104 -- # waitforlisten 1710479 /var/tmp/bdevperf.sock 00:16:20.223 17:03:35 -- common/autotest_common.sh@817 -- # '[' -z 1710479 ']' 00:16:20.223 17:03:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.223 17:03:35 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:20.223 17:03:35 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:20.223 17:03:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:20.223 17:03:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.223 17:03:35 -- nvmf/common.sh@521 -- # config=() 00:16:20.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:20.223 17:03:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:20.223 17:03:35 -- nvmf/common.sh@521 -- # local subsystem config 00:16:20.223 17:03:35 -- common/autotest_common.sh@10 -- # set +x 00:16:20.223 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.223 { 00:16:20.223 "params": { 00:16:20.223 "name": "Nvme$subsystem", 00:16:20.223 "trtype": "$TEST_TRANSPORT", 00:16:20.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.223 "adrfam": "ipv4", 00:16:20.223 "trsvcid": "$NVMF_PORT", 00:16:20.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.223 "hdgst": ${hdgst:-false}, 00:16:20.223 "ddgst": ${ddgst:-false} 00:16:20.223 }, 00:16:20.223 "method": "bdev_nvme_attach_controller" 00:16:20.223 } 00:16:20.223 EOF 00:16:20.223 )") 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.223 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.223 { 00:16:20.223 "params": { 00:16:20.223 "name": "Nvme$subsystem", 00:16:20.223 "trtype": "$TEST_TRANSPORT", 00:16:20.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.223 "adrfam": "ipv4", 00:16:20.223 "trsvcid": "$NVMF_PORT", 00:16:20.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.223 "hdgst": ${hdgst:-false}, 00:16:20.223 "ddgst": ${ddgst:-false} 00:16:20.223 }, 00:16:20.223 "method": "bdev_nvme_attach_controller" 00:16:20.223 } 00:16:20.223 EOF 00:16:20.223 )") 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.223 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.223 { 00:16:20.223 "params": { 00:16:20.223 "name": "Nvme$subsystem", 00:16:20.223 "trtype": "$TEST_TRANSPORT", 
00:16:20.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.223 "adrfam": "ipv4", 00:16:20.223 "trsvcid": "$NVMF_PORT", 00:16:20.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.223 "hdgst": ${hdgst:-false}, 00:16:20.223 "ddgst": ${ddgst:-false} 00:16:20.223 }, 00:16:20.223 "method": "bdev_nvme_attach_controller" 00:16:20.223 } 00:16:20.223 EOF 00:16:20.223 )") 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.223 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.223 { 00:16:20.223 "params": { 00:16:20.223 "name": "Nvme$subsystem", 00:16:20.223 "trtype": "$TEST_TRANSPORT", 00:16:20.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.223 "adrfam": "ipv4", 00:16:20.223 "trsvcid": "$NVMF_PORT", 00:16:20.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.223 "hdgst": ${hdgst:-false}, 00:16:20.223 "ddgst": ${ddgst:-false} 00:16:20.223 }, 00:16:20.223 "method": "bdev_nvme_attach_controller" 00:16:20.223 } 00:16:20.223 EOF 00:16:20.223 )") 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.223 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.223 { 00:16:20.223 "params": { 00:16:20.223 "name": "Nvme$subsystem", 00:16:20.223 "trtype": "$TEST_TRANSPORT", 00:16:20.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.223 "adrfam": "ipv4", 00:16:20.223 "trsvcid": "$NVMF_PORT", 00:16:20.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.223 "hdgst": ${hdgst:-false}, 00:16:20.223 "ddgst": ${ddgst:-false} 00:16:20.223 }, 00:16:20.223 "method": "bdev_nvme_attach_controller" 00:16:20.223 } 00:16:20.223 EOF 00:16:20.223 )") 00:16:20.223 17:03:35 -- nvmf/common.sh@543 -- 
# cat 00:16:20.484 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.484 { 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme$subsystem", 00:16:20.484 "trtype": "$TEST_TRANSPORT", 00:16:20.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "$NVMF_PORT", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.484 "hdgst": ${hdgst:-false}, 00:16:20.484 "ddgst": ${ddgst:-false} 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 } 00:16:20.484 EOF 00:16:20.484 )") 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.484 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.484 { 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme$subsystem", 00:16:20.484 "trtype": "$TEST_TRANSPORT", 00:16:20.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "$NVMF_PORT", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.484 "hdgst": ${hdgst:-false}, 00:16:20.484 "ddgst": ${ddgst:-false} 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 } 00:16:20.484 EOF 00:16:20.484 )") 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.484 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.484 { 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme$subsystem", 00:16:20.484 "trtype": "$TEST_TRANSPORT", 00:16:20.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "$NVMF_PORT", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.484 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:16:20.484 "hdgst": ${hdgst:-false}, 00:16:20.484 "ddgst": ${ddgst:-false} 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 } 00:16:20.484 EOF 00:16:20.484 )") 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.484 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.484 { 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme$subsystem", 00:16:20.484 "trtype": "$TEST_TRANSPORT", 00:16:20.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "$NVMF_PORT", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.484 "hdgst": ${hdgst:-false}, 00:16:20.484 "ddgst": ${ddgst:-false} 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 } 00:16:20.484 EOF 00:16:20.484 )") 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.484 17:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:20.484 { 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme$subsystem", 00:16:20.484 "trtype": "$TEST_TRANSPORT", 00:16:20.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "$NVMF_PORT", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.484 "hdgst": ${hdgst:-false}, 00:16:20.484 "ddgst": ${ddgst:-false} 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 } 00:16:20.484 EOF 00:16:20.484 )") 00:16:20.484 17:03:35 -- nvmf/common.sh@543 -- # cat 00:16:20.484 17:03:35 -- nvmf/common.sh@545 -- # jq . 
00:16:20.484 17:03:35 -- nvmf/common.sh@546 -- # IFS=, 00:16:20.484 17:03:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme1", 00:16:20.484 "trtype": "tcp", 00:16:20.484 "traddr": "10.0.0.2", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "4420", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:20.484 "hdgst": false, 00:16:20.484 "ddgst": false 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 },{ 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme2", 00:16:20.484 "trtype": "tcp", 00:16:20.484 "traddr": "10.0.0.2", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "4420", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:20.484 "hdgst": false, 00:16:20.484 "ddgst": false 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 },{ 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme3", 00:16:20.484 "trtype": "tcp", 00:16:20.484 "traddr": "10.0.0.2", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "4420", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:20.484 "hdgst": false, 00:16:20.484 "ddgst": false 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 },{ 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme4", 00:16:20.484 "trtype": "tcp", 00:16:20.484 "traddr": "10.0.0.2", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "4420", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:20.484 "hdgst": false, 00:16:20.484 "ddgst": false 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 },{ 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme5", 00:16:20.484 "trtype": "tcp", 00:16:20.484 "traddr": "10.0.0.2", 00:16:20.484 "adrfam": "ipv4", 
00:16:20.484 "trsvcid": "4420", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:20.484 "hdgst": false, 00:16:20.484 "ddgst": false 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 },{ 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme6", 00:16:20.484 "trtype": "tcp", 00:16:20.484 "traddr": "10.0.0.2", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "4420", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:20.484 "hdgst": false, 00:16:20.484 "ddgst": false 00:16:20.484 }, 00:16:20.484 "method": "bdev_nvme_attach_controller" 00:16:20.484 },{ 00:16:20.484 "params": { 00:16:20.484 "name": "Nvme7", 00:16:20.484 "trtype": "tcp", 00:16:20.484 "traddr": "10.0.0.2", 00:16:20.484 "adrfam": "ipv4", 00:16:20.484 "trsvcid": "4420", 00:16:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:20.484 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:20.484 "hdgst": false, 00:16:20.484 "ddgst": false 00:16:20.484 }, 00:16:20.485 "method": "bdev_nvme_attach_controller" 00:16:20.485 },{ 00:16:20.485 "params": { 00:16:20.485 "name": "Nvme8", 00:16:20.485 "trtype": "tcp", 00:16:20.485 "traddr": "10.0.0.2", 00:16:20.485 "adrfam": "ipv4", 00:16:20.485 "trsvcid": "4420", 00:16:20.485 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:20.485 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:20.485 "hdgst": false, 00:16:20.485 "ddgst": false 00:16:20.485 }, 00:16:20.485 "method": "bdev_nvme_attach_controller" 00:16:20.485 },{ 00:16:20.485 "params": { 00:16:20.485 "name": "Nvme9", 00:16:20.485 "trtype": "tcp", 00:16:20.485 "traddr": "10.0.0.2", 00:16:20.485 "adrfam": "ipv4", 00:16:20.485 "trsvcid": "4420", 00:16:20.485 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:20.485 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:20.485 "hdgst": false, 00:16:20.485 "ddgst": false 00:16:20.485 }, 00:16:20.485 "method": "bdev_nvme_attach_controller" 
00:16:20.485 },{ 00:16:20.485 "params": { 00:16:20.485 "name": "Nvme10", 00:16:20.485 "trtype": "tcp", 00:16:20.485 "traddr": "10.0.0.2", 00:16:20.485 "adrfam": "ipv4", 00:16:20.485 "trsvcid": "4420", 00:16:20.485 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:20.485 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:20.485 "hdgst": false, 00:16:20.485 "ddgst": false 00:16:20.485 }, 00:16:20.485 "method": "bdev_nvme_attach_controller" 00:16:20.485 }' 00:16:20.485 [2024-04-18 17:03:35.954653] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:20.485 [2024-04-18 17:03:35.954754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1710479 ] 00:16:20.485 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.485 [2024-04-18 17:03:36.017723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.485 [2024-04-18 17:03:36.125740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.418 Running I/O for 10 seconds... 
00:16:22.984 17:03:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:22.984 17:03:38 -- common/autotest_common.sh@850 -- # return 0 00:16:22.984 17:03:38 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:22.984 17:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.984 17:03:38 -- common/autotest_common.sh@10 -- # set +x 00:16:22.984 17:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.984 17:03:38 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:22.984 17:03:38 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:22.984 17:03:38 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:22.984 17:03:38 -- target/shutdown.sh@57 -- # local ret=1 00:16:22.984 17:03:38 -- target/shutdown.sh@58 -- # local i 00:16:22.984 17:03:38 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:22.984 17:03:38 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:22.984 17:03:38 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:22.984 17:03:38 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:22.984 17:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.984 17:03:38 -- common/autotest_common.sh@10 -- # set +x 00:16:22.984 17:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.984 17:03:38 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:22.984 17:03:38 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:22.984 17:03:38 -- target/shutdown.sh@64 -- # ret=0 00:16:22.984 17:03:38 -- target/shutdown.sh@65 -- # break 00:16:22.984 17:03:38 -- target/shutdown.sh@69 -- # return 0 00:16:22.984 17:03:38 -- target/shutdown.sh@110 -- # killprocess 1710479 00:16:22.984 17:03:38 -- common/autotest_common.sh@936 -- # '[' -z 1710479 ']' 00:16:22.984 17:03:38 -- common/autotest_common.sh@940 -- # kill -0 1710479 00:16:22.984 17:03:38 -- common/autotest_common.sh@941 -- # uname 
00:16:22.984 17:03:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.984 17:03:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1710479 00:16:23.242 17:03:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:23.242 17:03:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:23.242 17:03:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1710479' 00:16:23.242 killing process with pid 1710479 00:16:23.242 17:03:38 -- common/autotest_common.sh@955 -- # kill 1710479 00:16:23.242 17:03:38 -- common/autotest_common.sh@960 -- # wait 1710479 00:16:23.242 Received shutdown signal, test time was about 0.900108 seconds 00:16:23.242 00:16:23.242 Latency(us) 00:16:23.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.242 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme1n1 : 0.85 226.12 14.13 0.00 0.00 279511.55 28738.75 234570.33 00:16:23.242 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme2n1 : 0.87 219.66 13.73 0.00 0.00 281304.18 20583.16 229910.00 00:16:23.242 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme3n1 : 0.90 284.67 17.79 0.00 0.00 212980.62 18252.99 253211.69 00:16:23.242 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme4n1 : 0.89 293.26 18.33 0.00 0.00 201723.50 2815.62 253211.69 00:16:23.242 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme5n1 : 0.87 219.93 13.75 0.00 0.00 262381.16 22816.24 251658.24 00:16:23.242 Job: Nvme6n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme6n1 : 0.89 215.96 13.50 0.00 0.00 262263.09 23690.05 257872.02 00:16:23.242 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme7n1 : 0.87 221.89 13.87 0.00 0.00 248482.58 36117.62 231463.44 00:16:23.242 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme8n1 : 0.85 225.10 14.07 0.00 0.00 238443.33 32234.00 246997.90 00:16:23.242 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme9n1 : 0.88 222.74 13.92 0.00 0.00 235445.14 3325.35 267192.70 00:16:23.242 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.242 Verification LBA range: start 0x0 length 0x400 00:16:23.242 Nvme10n1 : 0.90 214.40 13.40 0.00 0.00 240513.33 22719.15 292047.83 00:16:23.242 =================================================================================================================== 00:16:23.242 Total : 2343.72 146.48 0.00 0.00 243771.91 2815.62 292047.83 00:16:23.499 17:03:39 -- target/shutdown.sh@113 -- # sleep 1 00:16:24.431 17:03:40 -- target/shutdown.sh@114 -- # kill -0 1710173 00:16:24.431 17:03:40 -- target/shutdown.sh@116 -- # stoptarget 00:16:24.431 17:03:40 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:24.431 17:03:40 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:24.431 17:03:40 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:24.431 17:03:40 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:24.431 17:03:40 -- nvmf/common.sh@477 -- # nvmfcleanup 
00:16:24.431 17:03:40 -- nvmf/common.sh@117 -- # sync 00:16:24.431 17:03:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.431 17:03:40 -- nvmf/common.sh@120 -- # set +e 00:16:24.431 17:03:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.431 17:03:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.431 rmmod nvme_tcp 00:16:24.431 rmmod nvme_fabrics 00:16:24.431 rmmod nvme_keyring 00:16:24.431 17:03:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.431 17:03:40 -- nvmf/common.sh@124 -- # set -e 00:16:24.431 17:03:40 -- nvmf/common.sh@125 -- # return 0 00:16:24.431 17:03:40 -- nvmf/common.sh@478 -- # '[' -n 1710173 ']' 00:16:24.431 17:03:40 -- nvmf/common.sh@479 -- # killprocess 1710173 00:16:24.431 17:03:40 -- common/autotest_common.sh@936 -- # '[' -z 1710173 ']' 00:16:24.431 17:03:40 -- common/autotest_common.sh@940 -- # kill -0 1710173 00:16:24.688 17:03:40 -- common/autotest_common.sh@941 -- # uname 00:16:24.688 17:03:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:24.688 17:03:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1710173 00:16:24.688 17:03:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:24.688 17:03:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:24.688 17:03:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1710173' 00:16:24.688 killing process with pid 1710173 00:16:24.688 17:03:40 -- common/autotest_common.sh@955 -- # kill 1710173 00:16:24.688 17:03:40 -- common/autotest_common.sh@960 -- # wait 1710173 00:16:25.254 17:03:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:25.254 17:03:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:25.254 17:03:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:25.254 17:03:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.254 17:03:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.254 17:03:40 -- nvmf/common.sh@617 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.254 17:03:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.254 17:03:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.154 17:03:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:27.154 00:16:27.154 real 0m8.571s 00:16:27.154 user 0m26.913s 00:16:27.154 sys 0m1.627s 00:16:27.154 17:03:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.154 17:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:27.154 ************************************ 00:16:27.154 END TEST nvmf_shutdown_tc2 00:16:27.154 ************************************ 00:16:27.154 17:03:42 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:16:27.154 17:03:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:27.154 17:03:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.154 17:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:27.412 ************************************ 00:16:27.412 START TEST nvmf_shutdown_tc3 00:16:27.412 ************************************ 00:16:27.412 17:03:42 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:16:27.412 17:03:42 -- target/shutdown.sh@121 -- # starttarget 00:16:27.412 17:03:42 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:27.412 17:03:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:27.412 17:03:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.412 17:03:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:27.412 17:03:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:27.412 17:03:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:27.412 17:03:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.412 17:03:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.412 17:03:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.412 17:03:42 -- nvmf/common.sh@403 -- # [[ phy != 
virt ]] 00:16:27.412 17:03:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:27.412 17:03:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:27.412 17:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:27.412 17:03:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:27.412 17:03:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:27.412 17:03:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:27.412 17:03:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:27.412 17:03:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:27.412 17:03:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:27.412 17:03:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:27.412 17:03:42 -- nvmf/common.sh@295 -- # net_devs=() 00:16:27.412 17:03:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:27.412 17:03:42 -- nvmf/common.sh@296 -- # e810=() 00:16:27.412 17:03:42 -- nvmf/common.sh@296 -- # local -ga e810 00:16:27.412 17:03:42 -- nvmf/common.sh@297 -- # x722=() 00:16:27.412 17:03:42 -- nvmf/common.sh@297 -- # local -ga x722 00:16:27.412 17:03:42 -- nvmf/common.sh@298 -- # mlx=() 00:16:27.412 17:03:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:27.413 17:03:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.413 17:03:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:27.413 17:03:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:27.413 17:03:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:27.413 17:03:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.413 17:03:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:27.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:27.413 17:03:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.413 17:03:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:27.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:27.413 17:03:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:27.413 17:03:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.413 17:03:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.413 17:03:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:27.413 17:03:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.413 17:03:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:27.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:27.413 17:03:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.413 17:03:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.413 17:03:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.413 17:03:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:27.413 17:03:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.413 17:03:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:27.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:27.413 17:03:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.413 17:03:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:27.413 17:03:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:27.413 17:03:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:27.413 17:03:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:27.413 17:03:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.413 17:03:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.413 17:03:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.413 17:03:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:27.413 17:03:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.413 17:03:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.413 17:03:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:16:27.413 17:03:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.413 17:03:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.413 17:03:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:27.413 17:03:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:27.413 17:03:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.413 17:03:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.413 17:03:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.413 17:03:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.413 17:03:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:27.413 17:03:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.413 17:03:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.413 17:03:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.413 17:03:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:27.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:16:27.413 00:16:27.413 --- 10.0.0.2 ping statistics --- 00:16:27.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.413 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:16:27.413 17:03:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:16:27.413 00:16:27.413 --- 10.0.0.1 ping statistics --- 00:16:27.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.413 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:16:27.413 17:03:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.413 17:03:43 -- nvmf/common.sh@411 -- # return 0 00:16:27.413 17:03:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:27.413 17:03:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.413 17:03:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:27.413 17:03:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:27.413 17:03:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.413 17:03:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:27.413 17:03:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:27.413 17:03:43 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:27.413 17:03:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:27.413 17:03:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:27.413 17:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:27.413 17:03:43 -- nvmf/common.sh@470 -- # nvmfpid=1711405 00:16:27.413 17:03:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:27.413 17:03:43 -- nvmf/common.sh@471 -- # waitforlisten 1711405 00:16:27.413 17:03:43 -- common/autotest_common.sh@817 -- # '[' -z 1711405 ']' 00:16:27.413 17:03:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.413 17:03:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:27.413 17:03:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:27.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.413 17:03:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:27.413 17:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:27.413 [2024-04-18 17:03:43.080904] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:27.413 [2024-04-18 17:03:43.080991] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.413 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.675 [2024-04-18 17:03:43.144872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.675 [2024-04-18 17:03:43.254703] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.675 [2024-04-18 17:03:43.254764] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.675 [2024-04-18 17:03:43.254792] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.675 [2024-04-18 17:03:43.254804] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.676 [2024-04-18 17:03:43.254814] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.676 [2024-04-18 17:03:43.254900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.676 [2024-04-18 17:03:43.254962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.676 [2024-04-18 17:03:43.255028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:27.676 [2024-04-18 17:03:43.255031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.933 17:03:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:27.933 17:03:43 -- common/autotest_common.sh@850 -- # return 0 00:16:27.933 17:03:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:27.933 17:03:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:27.933 17:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:27.933 17:03:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.933 17:03:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.933 17:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.933 17:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:27.933 [2024-04-18 17:03:43.416209] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.933 17:03:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.933 17:03:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:27.933 17:03:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:27.933 17:03:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:27.933 17:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:27.933 17:03:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:27.933 17:03:43 -- target/shutdown.sh@28 -- # cat 00:16:27.933 17:03:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:27.933 17:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.933 17:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:27.933 Malloc1 00:16:27.933 [2024-04-18 17:03:43.501687] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.933 Malloc2 00:16:27.933 Malloc3 00:16:27.933 Malloc4 00:16:28.191 Malloc5 00:16:28.191 Malloc6 00:16:28.191 Malloc7 00:16:28.191 Malloc8 00:16:28.191 Malloc9 00:16:28.449 Malloc10 00:16:28.449 17:03:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.449 17:03:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:28.449 17:03:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:28.449 17:03:43 -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.449 17:03:43 -- target/shutdown.sh@125 -- # perfpid=1711580 00:16:28.449 17:03:43 -- target/shutdown.sh@126 -- # waitforlisten 1711580 /var/tmp/bdevperf.sock 00:16:28.449 17:03:43 -- common/autotest_common.sh@817 -- # '[' -z 1711580 ']' 00:16:28.449 17:03:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.449 17:03:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:28.449 17:03:43 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:28.449 17:03:43 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:28.449 17:03:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:28.449 17:03:43 -- nvmf/common.sh@521 -- # config=() 00:16:28.449 17:03:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:28.449 17:03:43 -- nvmf/common.sh@521 -- # local subsystem config 00:16:28.449 17:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.449 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.449 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.449 { 00:16:28.449 "params": { 00:16:28.449 "name": "Nvme$subsystem", 00:16:28.449 "trtype": "$TEST_TRANSPORT", 00:16:28.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.449 "adrfam": "ipv4", 00:16:28.449 "trsvcid": "$NVMF_PORT", 00:16:28.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.449 "hdgst": ${hdgst:-false}, 00:16:28.449 "ddgst": ${ddgst:-false} 00:16:28.449 }, 00:16:28.449 "method": "bdev_nvme_attach_controller" 00:16:28.449 } 00:16:28.449 EOF 00:16:28.449 )") 00:16:28.449 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.449 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.449 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.449 { 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.450 { 00:16:28.450 "params": { 00:16:28.450 "name": 
"Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.450 { 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.450 { 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 
00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.450 { 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.450 { 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.450 { 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.450 { 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.450 { 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme$subsystem", 00:16:28.450 "trtype": "$TEST_TRANSPORT", 00:16:28.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "$NVMF_PORT", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.450 "hdgst": ${hdgst:-false}, 00:16:28.450 "ddgst": ${ddgst:-false} 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 } 00:16:28.450 EOF 00:16:28.450 )") 00:16:28.450 17:03:43 -- nvmf/common.sh@543 -- # cat 00:16:28.450 17:03:44 -- nvmf/common.sh@545 -- # jq . 
00:16:28.450 17:03:44 -- nvmf/common.sh@546 -- # IFS=, 00:16:28.450 17:03:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme1", 00:16:28.450 "trtype": "tcp", 00:16:28.450 "traddr": "10.0.0.2", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "4420", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.450 "hdgst": false, 00:16:28.450 "ddgst": false 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 },{ 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme2", 00:16:28.450 "trtype": "tcp", 00:16:28.450 "traddr": "10.0.0.2", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "4420", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:28.450 "hdgst": false, 00:16:28.450 "ddgst": false 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 },{ 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme3", 00:16:28.450 "trtype": "tcp", 00:16:28.450 "traddr": "10.0.0.2", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "4420", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:28.450 "hdgst": false, 00:16:28.450 "ddgst": false 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 },{ 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme4", 00:16:28.450 "trtype": "tcp", 00:16:28.450 "traddr": "10.0.0.2", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "4420", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:28.450 "hdgst": false, 00:16:28.450 "ddgst": false 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 },{ 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme5", 00:16:28.450 "trtype": "tcp", 00:16:28.450 "traddr": "10.0.0.2", 00:16:28.450 "adrfam": "ipv4", 
00:16:28.450 "trsvcid": "4420", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:28.450 "hdgst": false, 00:16:28.450 "ddgst": false 00:16:28.450 }, 00:16:28.450 "method": "bdev_nvme_attach_controller" 00:16:28.450 },{ 00:16:28.450 "params": { 00:16:28.450 "name": "Nvme6", 00:16:28.450 "trtype": "tcp", 00:16:28.450 "traddr": "10.0.0.2", 00:16:28.450 "adrfam": "ipv4", 00:16:28.450 "trsvcid": "4420", 00:16:28.450 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:28.450 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:28.451 "hdgst": false, 00:16:28.451 "ddgst": false 00:16:28.451 }, 00:16:28.451 "method": "bdev_nvme_attach_controller" 00:16:28.451 },{ 00:16:28.451 "params": { 00:16:28.451 "name": "Nvme7", 00:16:28.451 "trtype": "tcp", 00:16:28.451 "traddr": "10.0.0.2", 00:16:28.451 "adrfam": "ipv4", 00:16:28.451 "trsvcid": "4420", 00:16:28.451 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:28.451 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:28.451 "hdgst": false, 00:16:28.451 "ddgst": false 00:16:28.451 }, 00:16:28.451 "method": "bdev_nvme_attach_controller" 00:16:28.451 },{ 00:16:28.451 "params": { 00:16:28.451 "name": "Nvme8", 00:16:28.451 "trtype": "tcp", 00:16:28.451 "traddr": "10.0.0.2", 00:16:28.451 "adrfam": "ipv4", 00:16:28.451 "trsvcid": "4420", 00:16:28.451 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:28.451 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:28.451 "hdgst": false, 00:16:28.451 "ddgst": false 00:16:28.451 }, 00:16:28.451 "method": "bdev_nvme_attach_controller" 00:16:28.451 },{ 00:16:28.451 "params": { 00:16:28.451 "name": "Nvme9", 00:16:28.451 "trtype": "tcp", 00:16:28.451 "traddr": "10.0.0.2", 00:16:28.451 "adrfam": "ipv4", 00:16:28.451 "trsvcid": "4420", 00:16:28.451 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:28.451 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:28.451 "hdgst": false, 00:16:28.451 "ddgst": false 00:16:28.451 }, 00:16:28.451 "method": "bdev_nvme_attach_controller" 
00:16:28.451 },{ 00:16:28.451 "params": { 00:16:28.451 "name": "Nvme10", 00:16:28.451 "trtype": "tcp", 00:16:28.451 "traddr": "10.0.0.2", 00:16:28.451 "adrfam": "ipv4", 00:16:28.451 "trsvcid": "4420", 00:16:28.451 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:28.451 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:28.451 "hdgst": false, 00:16:28.451 "ddgst": false 00:16:28.451 }, 00:16:28.451 "method": "bdev_nvme_attach_controller" 00:16:28.451 }' 00:16:28.451 [2024-04-18 17:03:44.011465] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:28.451 [2024-04-18 17:03:44.011543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711580 ] 00:16:28.451 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.451 [2024-04-18 17:03:44.075316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.708 [2024-04-18 17:03:44.184051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.605 Running I/O for 10 seconds... 
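The `gen_nvmf_target_json` expansion traced above builds one JSON object per subsystem in a `config` array, then joins the entries with `IFS=,` before handing the result to bdevperf via `--json /dev/fd/63`. A compact sketch of that mechanism (three subsystems instead of ten, and `printf` in place of the heredoc; the variable values mirror the log but are assumptions here, not live test state):

```shell
# Values substituted into each per-subsystem entry (taken from the log).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
    # One bdev_nvme_attach_controller request per subsystem, with the
    # subsystem number templated into name/subnqn (cf. nvmf/common.sh@543).
    config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "%s", "traddr": "%s", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s" }, "method": "bdev_nvme_attach_controller" }' \
        "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem")")
done

# Join the array with commas, as nvmf/common.sh@546-547 does with IFS=,
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

The comma-joined string is exactly the `{...},{...}` shape visible in the `printf '%s\n'` output above; the real helper additionally pipes each fragment through `jq .` and carries `hostnqn`/`hdgst`/`ddgst` fields omitted here for brevity.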
00:16:30.605 17:03:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:30.605 17:03:46 -- common/autotest_common.sh@850 -- # return 0 00:16:30.605 17:03:46 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:30.605 17:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.605 17:03:46 -- common/autotest_common.sh@10 -- # set +x 00:16:30.605 17:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.605 17:03:46 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:30.605 17:03:46 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:30.605 17:03:46 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:30.605 17:03:46 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:30.605 17:03:46 -- target/shutdown.sh@57 -- # local ret=1 00:16:30.605 17:03:46 -- target/shutdown.sh@58 -- # local i 00:16:30.605 17:03:46 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:30.605 17:03:46 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:30.605 17:03:46 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:30.606 17:03:46 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:30.606 17:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.606 17:03:46 -- common/autotest_common.sh@10 -- # set +x 00:16:30.606 17:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.606 17:03:46 -- target/shutdown.sh@60 -- # read_io_count=3 00:16:30.606 17:03:46 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:16:30.606 17:03:46 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:30.863 17:03:46 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:30.863 17:03:46 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:30.863 17:03:46 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:30.863 
17:03:46 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:30.863 17:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.863 17:03:46 -- common/autotest_common.sh@10 -- # set +x 00:16:30.863 17:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.863 17:03:46 -- target/shutdown.sh@60 -- # read_io_count=67 00:16:30.863 17:03:46 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:16:30.863 17:03:46 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:31.139 17:03:46 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:31.139 17:03:46 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:31.139 17:03:46 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:31.139 17:03:46 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:31.139 17:03:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.139 17:03:46 -- common/autotest_common.sh@10 -- # set +x 00:16:31.139 17:03:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.139 17:03:46 -- target/shutdown.sh@60 -- # read_io_count=135 00:16:31.139 17:03:46 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:16:31.139 17:03:46 -- target/shutdown.sh@64 -- # ret=0 00:16:31.139 17:03:46 -- target/shutdown.sh@65 -- # break 00:16:31.139 17:03:46 -- target/shutdown.sh@69 -- # return 0 00:16:31.139 17:03:46 -- target/shutdown.sh@135 -- # killprocess 1711405 00:16:31.139 17:03:46 -- common/autotest_common.sh@936 -- # '[' -z 1711405 ']' 00:16:31.139 17:03:46 -- common/autotest_common.sh@940 -- # kill -0 1711405 00:16:31.139 17:03:46 -- common/autotest_common.sh@941 -- # uname 00:16:31.139 17:03:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:31.139 17:03:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1711405 00:16:31.139 17:03:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:31.139 17:03:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:31.139 
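The `waitforio` polling visible above (shutdown.sh@57-69) counts down from `i=10`, reads `num_read_ops` for `Nvme1n1` on each pass, and returns success once the count reaches 100: the log shows exactly that progression (3, then 67, then 135). A self-contained sketch of the loop, with a stub function standing in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` (the stub and its canned samples are illustrative, not part of the harness):

```shell
# Canned read-op counts matching the three polls seen in the log.
samples=(3 67 135)
n=0
# Stub for the real RPC; sets read_io_count in the caller's scope
# (no command substitution, so the counter survives between calls).
rpc_read_ops() { read_io_count=${samples[n]}; n=$((n + 1)); }

ret=1
for ((i = 10; i != 0; i--)); do
    rpc_read_ops
    if [ "$read_io_count" -ge 100 ]; then
        ret=0          # enough I/O observed; bdevperf is making progress
        break
    fi
    sleep 0.25         # same back-off as shutdown.sh@67
done
```

If all ten polls stay below the threshold, `ret` stays 1 and the harness treats the target as stalled; here the third sample (135) trips the `-ge 100` check, matching `read_io_count=135` / `ret=0` / `break` in the trace.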
17:03:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1711405' 00:16:31.139 killing process with pid 1711405 00:16:31.139 17:03:46 -- common/autotest_common.sh@955 -- # kill 1711405 00:16:31.139 17:03:46 -- common/autotest_common.sh@960 -- # wait 1711405 00:16:31.139 [2024-04-18 17:03:46.732459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732563] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732593] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732618] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732680] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732705] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732718] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732730] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732755] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732793] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732818] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732830] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.732990] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.139 [2024-04-18 17:03:46.733032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.139 [2024-04-18 17:03:46.733059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.139 [2024-04-18 17:03:46.733072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.139 [2024-04-18 17:03:46.733085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 
17:03:46.733095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.139
[2024-04-18 17:03:46.733098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139
[2024-04-18 17:03:46.733109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.139
[2024-04-18 17:03:46.733111] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139
[2024-04-18 17:03:46.733123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.139
[2024-04-18 17:03:46.733124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139
[2024-04-18 17:03:46.733138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.139
[2024-04-18 17:03:46.733139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139
[2024-04-18 17:03:46.733154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaf870 is same with the state(5) to be set 00:16:31.139
[2024-04-18 17:03:46.733154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139
[2024-04-18 17:03:46.733175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139
[2024-04-18 17:03:46.733188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139
[2024-04-18 17:03:46.733200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733225] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.139 [2024-04-18 17:03:46.733237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733349] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.733361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1700 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735359] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735433] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735613] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735637] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735664] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735701] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140 [2024-04-18 17:03:46.735763] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735818] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735830] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.140
[2024-04-18 17:03:46.735886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.140
[2024-04-18 17:03:46.735913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.140
[2024-04-18 17:03:46.735939] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.140
[2024-04-18 17:03:46.735952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.140
[2024-04-18 17:03:46.735965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.140
[2024-04-18 17:03:46.735982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.735994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.140
[2024-04-18 17:03:46.735996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.736010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.140
[2024-04-18 17:03:46.736010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.736028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.140
[2024-04-18 17:03:46.736026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.736050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.140
[2024-04-18 17:03:46.736050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.736066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.736068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.140
[2024-04-18 17:03:46.736079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.736082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.140
[2024-04-18 17:03:46.736092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.736098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.140
[2024-04-18 17:03:46.736104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.140
[2024-04-18 17:03:46.736112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141
[2024-04-18 17:03:46.736117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.141
[2024-04-18 17:03:46.736127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141
[2024-04-18 17:03:46.736130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.141
[2024-04-18 17:03:46.736141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141
[2024-04-18 17:03:46.736142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.141
[2024-04-18 17:03:46.736156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf280 is same with the state(5) to be set 00:16:31.141
[2024-04-18 17:03:46.736156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141
[2024-04-18 17:03:46.736171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 
[2024-04-18 17:03:46.736696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.736978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.736991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.737006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.737019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.737043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.737057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.737072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.737085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.737100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.737113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.737128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.737142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.737157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.737170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.737186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.141 [2024-04-18 17:03:46.737199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.141 [2024-04-18 17:03:46.737213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 
17:03:46.737340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.737795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.737899] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1276dc0 was disconnected and freed. reset controller. 
00:16:31.142 [2024-04-18 17:03:46.738466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738644] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.738975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.738988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.142 [2024-04-18 17:03:46.739003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.142 [2024-04-18 17:03:46.739016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 
[2024-04-18 17:03:46.739130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739330] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with 
the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739484] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739593] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739606] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.143 [2024-04-18 17:03:46.739671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.143 [2024-04-18 17:03:46.739674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 
00:16:31.143 [2024-04-18 17:03:46.739685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.143 [2024-04-18 17:03:46.739687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739700] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739730] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739755] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739768] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739782] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739794] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739829] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739853] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739925] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739943] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739951] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.739976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.739987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.739989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.740003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:31.144 [2024-04-18 17:03:46.740030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.740049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.740076] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.740101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-18 17:03:46.740127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.740165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.740190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740203] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-18 17:03:46.740215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdfba0 is same with the state(5) to be set 00:16:31.144 [2024-04-18 17:03:46.740234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.740263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 [2024-04-18 17:03:46.740291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144 [2024-04-18 17:03:46.740305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.144 
[2024-04-18 17:03:46.740319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.144
[2024-04-18 17:03:46.740332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.145
[2024-04-18 17:03:46.740347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.145
[2024-04-18 17:03:46.740367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.145
[2024-04-18 17:03:46.740394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.145
[2024-04-18 17:03:46.740410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.145
[2024-04-18 17:03:46.740456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:16:31.145
[2024-04-18 17:03:46.740533] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1279580 was disconnected and freed. reset controller. 00:16:31.145
[2024-04-18 17:03:46.741084] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741129] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741167] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741191] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741203] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741214] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741254] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741352] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741376] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741397] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741444] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741732] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.741989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.742001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.742013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.742025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.742037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.742049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.742061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0030 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.743194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce04c0 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.743231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce04c0 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.743246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce04c0 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.743258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce04c0 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.744279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.145
[2024-04-18 17:03:46.744307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.145
[2024-04-18 17:03:46.744328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.145
[2024-04-18 17:03:46.744343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.145
[2024-04-18 17:03:46.744359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:31.145
[2024-04-18 17:03:46.744373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.145
[2024-04-18 17:03:46.744503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.145
[2024-04-18 17:03:46.744499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.145
[2024-04-18 17:03:46.744524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.145
[2024-04-18 17:03:46.744540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.145
[2024-04-18 17:03:46.744540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744557] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744619] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744646] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744700] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744763] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744778] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744793] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744817] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744830] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744857] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744870] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.744961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.744990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.744992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.745002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.745015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.745028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.745040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.745056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.745082] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.745095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.745107] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.146
[2024-04-18 17:03:46.745120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.146
[2024-04-18 17:03:46.745128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.146
[2024-04-18 17:03:46.745132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147
[2024-04-18 17:03:46.745145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147
[2024-04-18 17:03:46.745158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147
[2024-04-18 17:03:46.745172] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147
[2024-04-18 17:03:46.745199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147
[2024-04-18 17:03:46.745211] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147
[2024-04-18 17:03:46.745224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147
[2024-04-18 17:03:46.745237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147
[2024-04-18 17:03:46.745263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147
[2024-04-18 17:03:46.745277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147
[2024-04-18 17:03:46.745290] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147
[2024-04-18 17:03:46.745303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147
[2024-04-18 17:03:46.745316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147
[2024-04-18 17:03:46.745328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with the state(5) to be set 00:16:31.147
[2024-04-18 17:03:46.745338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:1[2024-04-18 17:03:46.745340] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 the state(5) to be set 00:16:31.147 [2024-04-18 17:03:46.745353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-18 17:03:46.745354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0950 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 the state(5) to be set 00:16:31.147 [2024-04-18 17:03:46.745369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.745959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 
17:03:46.745986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.745999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.746013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.746026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.147 [2024-04-18 17:03:46.746040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.147 [2024-04-18 17:03:46.746053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.148 [2024-04-18 17:03:46.746307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:16:31.148 [2024-04-18 17:03:46.746418] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x127cc40 was disconnected and freed. reset controller. 00:16:31.148 [2024-04-18 17:03:46.746478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746559] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746670] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:31.148 [2024-04-18 17:03:46.746712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:16:31.148 [2024-04-18 17:03:46.746734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746757] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 
[2024-04-18 17:03:46.746790] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c5380 (9): Bad file descriptor
00:16:31.148 [2024-04-18 17:03:46.746798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746816] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf870 (9): Bad file descriptor
00:16:31.148 [2024-04-18 17:03:46.746829] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.148 [2024-04-18 17:03:46.746866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.148 [2024-04-18 17:03:46.746891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18
17:03:46.746897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.148 [2024-04-18 17:03:46.746903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746916] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.148 [2024-04-18 17:03:46.746932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.148 [2024-04-18 17:03:46.746958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.148 [2024-04-18 17:03:46.746964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.148 [2024-04-18 17:03:46.746970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 
00:16:31.148 [2024-04-18 17:03:46.746977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1357150 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.746995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.747006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.747018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.148 [2024-04-18 17:03:46.747023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.148 [2024-04-18 17:03:46.747030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747043] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.149 [2024-04-18 17:03:46.747056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.149 [2024-04-18 17:03:46.747069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be
set
00:16:31.149 [2024-04-18 17:03:46.747073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.149 [2024-04-18 17:03:46.747081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.149 [2024-04-18 17:03:46.747093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.149 [2024-04-18 17:03:46.747106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.149 [2024-04-18 17:03:46.747118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747134] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.149 [2024-04-18 17:03:46.747148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137ac00 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.149 [2024-04-18 17:03:46.747209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.149 [2024-04-18 17:03:46.747222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.149 [2024-04-18 17:03:46.747235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.149 [2024-04-18 17:03:46.747263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.149 [2024-04-18 17:03:46.747275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.149 [2024-04-18 17:03:46.747288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:31.149 [2024-04-18 17:03:46.747301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.149 [2024-04-18 17:03:46.747314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee10 is same with the state(5) to be set
00:16:31.149 [2024-04-18 17:03:46.747346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.747358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce0de0 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.747388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ccc0 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.747551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747570] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaf560 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.747704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13661f0 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.747878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747965] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.149 [2024-04-18 17:03:46.747978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.149 [2024-04-18 17:03:46.747990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e58f0 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.748109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.748134] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.748155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.149 [2024-04-18 17:03:46.748167] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748240] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748330] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748399] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748424] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748467] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748554] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748608] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748684] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748708] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748768] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748827] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748851] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.748911] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce1270 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.749640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:16:31.150 [2024-04-18 17:03:46.749673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13661f0 (9): Bad file descriptor 00:16:31.150 [2024-04-18 17:03:46.750680] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:31.150 [2024-04-18 17:03:46.750872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.150 [2024-04-18 17:03:46.750993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.150 [2024-04-18 17:03:46.751019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdaf870 with addr=10.0.0.2, port=4420 00:16:31.150 [2024-04-18 17:03:46.751035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaf870 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.751166] posix.c:1037:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:16:31.150 [2024-04-18 17:03:46.751306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.150 [2024-04-18 17:03:46.751330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c5380 with addr=10.0.0.2, port=4420 00:16:31.150 [2024-04-18 17:03:46.751345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c5380 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.751457] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:31.150 [2024-04-18 17:03:46.752186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.150 [2024-04-18 17:03:46.752331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.150 [2024-04-18 17:03:46.752355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13661f0 with addr=10.0.0.2, port=4420 00:16:31.150 [2024-04-18 17:03:46.752370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13661f0 is same with the state(5) to be set 00:16:31.150 [2024-04-18 17:03:46.752399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf870 (9): Bad file descriptor 00:16:31.150 [2024-04-18 17:03:46.752421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c5380 (9): Bad file descriptor 00:16:31.150 [2024-04-18 17:03:46.752534] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:31.150 [2024-04-18 17:03:46.752603] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:31.150 [2024-04-18 17:03:46.752682] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:31.150 [2024-04-18 17:03:46.752836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13661f0 (9): Bad file descriptor 
00:16:31.150 [2024-04-18 17:03:46.752861] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:31.150 [2024-04-18 17:03:46.752875] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:31.150 [2024-04-18 17:03:46.752889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:31.150 [2024-04-18 17:03:46.752910] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:16:31.150 [2024-04-18 17:03:46.752924] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:16:31.150 [2024-04-18 17:03:46.752936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:16:31.150 [2024-04-18 17:03:46.753046] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:31.150 [2024-04-18 17:03:46.753143] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.151 [2024-04-18 17:03:46.753166] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.151 [2024-04-18 17:03:46.753179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:16:31.151 [2024-04-18 17:03:46.753192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:16:31.151 [2024-04-18 17:03:46.753204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:16:31.151 [2024-04-18 17:03:46.753295] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:31.151 [2024-04-18 17:03:46.753327] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:31.151 [2024-04-18 17:03:46.756735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1357150 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.756781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137ac00 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.756813] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134ee10 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.756844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138ccc0 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.756873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf560 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.756931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.151 [2024-04-18 17:03:46.756953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.756971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.151 [2024-04-18 17:03:46.756984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.756997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.151 [2024-04-18 17:03:46.757010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.757023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:16:31.151 [2024-04-18 17:03:46.757036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.757049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1385a20 is same with the state(5) to be set 00:16:31.151 [2024-04-18 17:03:46.757075] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e58f0 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.759849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:16:31.151 [2024-04-18 17:03:46.759878] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:31.151 [2024-04-18 17:03:46.760085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.151 [2024-04-18 17:03:46.760198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.151 [2024-04-18 17:03:46.760223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c5380 with addr=10.0.0.2, port=4420 00:16:31.151 [2024-04-18 17:03:46.760239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c5380 is same with the state(5) to be set 00:16:31.151 [2024-04-18 17:03:46.760347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.151 [2024-04-18 17:03:46.760461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.151 [2024-04-18 17:03:46.760486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdaf870 with addr=10.0.0.2, port=4420 00:16:31.151 [2024-04-18 17:03:46.760507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaf870 is same with the state(5) to be set 00:16:31.151 [2024-04-18 17:03:46.760563] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c5380 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.760586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf870 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.760638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:16:31.151 [2024-04-18 17:03:46.760654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:16:31.151 [2024-04-18 17:03:46.760681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:16:31.151 [2024-04-18 17:03:46.760702] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:31.151 [2024-04-18 17:03:46.760715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:31.151 [2024-04-18 17:03:46.760727] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:31.151 [2024-04-18 17:03:46.760782] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.151 [2024-04-18 17:03:46.760800] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:31.151 [2024-04-18 17:03:46.761476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:16:31.151 [2024-04-18 17:03:46.761635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.151 [2024-04-18 17:03:46.761745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.151 [2024-04-18 17:03:46.761769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13661f0 with addr=10.0.0.2, port=4420 00:16:31.151 [2024-04-18 17:03:46.761784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13661f0 is same with the state(5) to be set 00:16:31.151 [2024-04-18 17:03:46.761839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13661f0 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.761894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:16:31.151 [2024-04-18 17:03:46.761910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:16:31.151 [2024-04-18 17:03:46.761923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:16:31.151 [2024-04-18 17:03:46.761978] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:31.151 [2024-04-18 17:03:46.766806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1385a20 (9): Bad file descriptor 00:16:31.151 [2024-04-18 17:03:46.767000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.151 [2024-04-18 17:03:46.767027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.767060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.151 [2024-04-18 17:03:46.767076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.767094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.151 [2024-04-18 17:03:46.767107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.767122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.151 [2024-04-18 17:03:46.767136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.767151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.151 [2024-04-18 17:03:46.767165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.151 [2024-04-18 17:03:46.767180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:31.151 [2024-04-18 17:03:46.767201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats on qid:1 for READ cid:10-54 (lba:17664-23296, len:128), WRITE cid:0-3 (lba:24576-24960), and READ cid:55-63 (lba:23424-24448), each aborted with SQ DELETION (00/08) ...]
00:16:31.153 [2024-04-18 17:03:46.768929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278120 is same with the state(5) to be set
[... a second burst follows on qid:1: WRITE cid:62-63 (lba:32512-32640) and READ cid:0-56 (lba:24576-31744, len:128), each aborted with SQ DELETION (00/08) ...]
00:16:31.154 [2024-04-18 17:03:46.771975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.154 [2024-04-18 17:03:46.771988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.154 [2024-04-18 17:03:46.772003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.154 [2024-04-18 17:03:46.772016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.154 [2024-04-18 17:03:46.772032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.154 [2024-04-18 17:03:46.772045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.154 [2024-04-18 17:03:46.772061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.154 [2024-04-18 17:03:46.772075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.154 [2024-04-18 17:03:46.772091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.154 [2024-04-18 17:03:46.772104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.154 [2024-04-18 17:03:46.772118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127aa70 is same with the state(5) to be set 00:16:31.154 [2024-04-18 17:03:46.773350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:31.154 [2024-04-18 17:03:46.773374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.154 [2024-04-18 17:03:46.773401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773553] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.773972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.773988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774044] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 
17:03:46.774549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.155 [2024-04-18 17:03:46.774562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.155 [2024-04-18 17:03:46.774578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.774979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.774992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.775007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.775020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:31.156 [2024-04-18 17:03:46.775035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.775048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.775064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.775077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.775092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.775105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.775120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.775133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.775148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.775162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.775177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.775191] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.775206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.775219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.775233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127bed0 is same with the state(5) to be set 00:16:31.156 [2024-04-18 17:03:46.776475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.776497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.776518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.776533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.776548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.776562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.776577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.776590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.156 [2024-04-18 17:03:46.776606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.156 [2024-04-18 17:03:46.776619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[… identical NOTICE pairs repeated for cid:5 through cid:63 (lba 17024 to 24448 in steps of 128), each READ aborted with SQ DELETION (00/08), timestamps 17:03:46.776634 to 17:03:46.778332 …]
00:16:31.158 [2024-04-18 17:03:46.778346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcc50 is same with the state(5) to be set
00:16:31.158 [2024-04-18 17:03:46.779580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.158 [2024-04-18 17:03:46.779602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[… identical NOTICE pairs repeated for cid:1 through cid:56 (lba 16512 to 23552 in steps of 128), each READ aborted with SQ DELETION (00/08), timestamps 17:03:46.779621 to 17:03:46.781239 …]
00:16:31.159 [2024-04-18 17:03:46.781254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.781267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:31.159 [2024-04-18 17:03:46.781282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.781296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.159 [2024-04-18 17:03:46.781311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.781325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.159 [2024-04-18 17:03:46.781340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.781354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.159 [2024-04-18 17:03:46.781373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.781394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.159 [2024-04-18 17:03:46.781411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.781425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.159 [2024-04-18 17:03:46.781440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.781453] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.159 [2024-04-18 17:03:46.781468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11be100 is same with the state(5) to be set 00:16:31.159 [2024-04-18 17:03:46.782718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.782740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.159 [2024-04-18 17:03:46.782761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.159 [2024-04-18 17:03:46.782776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.159 [2024-04-18 17:03:46.782791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.782805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.782820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.782834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.782850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.782864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.782879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.782892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.782908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.782922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.782937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.782951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.782966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.782979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.782999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:31.160 [2024-04-18 17:03:46.783042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783697] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783858] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.160 [2024-04-18 17:03:46.783930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.160 [2024-04-18 17:03:46.783943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.783959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.783972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.783987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 
17:03:46.784192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.784595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.784609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127e3f0 is same with the state(5) to be set 00:16:31.161 [2024-04-18 17:03:46.786304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:16:31.161 [2024-04-18 17:03:46.786335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:16:31.161 [2024-04-18 17:03:46.786353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:16:31.161 [2024-04-18 17:03:46.786371] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:16:31.161 [2024-04-18 17:03:46.786496] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:31.161 [2024-04-18 17:03:46.786523] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:31.161 [2024-04-18 17:03:46.786638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:16:31.161 [2024-04-18 17:03:46.786664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:16:31.161 [2024-04-18 17:03:46.786979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.161 [2024-04-18 17:03:46.787122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.161 [2024-04-18 17:03:46.787147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138ccc0 with addr=10.0.0.2, port=4420 00:16:31.161 [2024-04-18 17:03:46.787165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ccc0 is same with the state(5) to be set 00:16:31.161 [2024-04-18 17:03:46.787272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.161 [2024-04-18 17:03:46.787409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.161 [2024-04-18 17:03:46.787433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdaf560 with addr=10.0.0.2, port=4420 00:16:31.161 [2024-04-18 17:03:46.787448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaf560 is same with the state(5) to be set 00:16:31.161 [2024-04-18 17:03:46.787548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.161 [2024-04-18 17:03:46.787657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.161 [2024-04-18 17:03:46.787680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e58f0 with addr=10.0.0.2, port=4420 00:16:31.161 [2024-04-18 17:03:46.787701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e58f0 is same with the state(5) to be set 00:16:31.161 [2024-04-18 
17:03:46.787802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.161 [2024-04-18 17:03:46.787966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.161 [2024-04-18 17:03:46.787989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1357150 with addr=10.0.0.2, port=4420 00:16:31.161 [2024-04-18 17:03:46.788004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1357150 is same with the state(5) to be set 00:16:31.161 [2024-04-18 17:03:46.789404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.789428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.161 [2024-04-18 17:03:46.789455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.161 [2024-04-18 17:03:46.789471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789545] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.789986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.789999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790044] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 
17:03:46.790549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.162 [2024-04-18 17:03:46.790634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.162 [2024-04-18 17:03:46.790647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.790985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.790999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:31.163 [2024-04-18 17:03:46.791042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791203] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.163 [2024-04-18 17:03:46.791291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.163 [2024-04-18 17:03:46.791305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cfa0 is same with the state(5) to be set 00:16:31.163 [2024-04-18 17:03:46.793311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:31.163 [2024-04-18 17:03:46.793343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:16:31.163 [2024-04-18 17:03:46.793362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:16:31.163 task offset: 25088 on job bdev=Nvme1n1 fails
00:16:31.163
00:16:31.163 Latency(us)
00:16:31.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:31.163 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme1n1 ended in about 0.89 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme1n1 : 0.89 215.04 13.44 71.68 0.00 220687.93 29709.65 242337.56
00:16:31.163 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme2n1 ended in about 0.92 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme2n1 : 0.92 143.46 8.97 69.56 0.00 291221.33 18641.35 253211.69
00:16:31.163 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme3n1 ended in about 0.89 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme3n1 : 0.89 214.77 13.42 71.59 0.00 211808.81 4684.61 253211.69
00:16:31.163 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme4n1 ended in about 0.92 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme4n1 : 0.92 207.96 13.00 69.32 0.00 214520.98 17379.18 248551.35
00:16:31.163 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme5n1 ended in about 0.93 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme5n1 : 0.93 138.18 8.64 69.09 0.00 281161.07 21262.79 253211.69
00:16:31.163 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme6n1 ended in about 0.90 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme6n1 : 0.90 213.48 13.34 71.16 0.00 199442.54 4320.52 250104.79
00:16:31.163 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme7n1 ended in about 0.93 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme7n1 : 0.93 137.71 8.61 68.86 0.00 270045.23 38253.61 236123.78
00:16:31.163 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme8n1 ended in about 0.93 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme8n1 : 0.93 137.26 8.58 68.63 0.00 265006.90 15728.64 282727.16
00:16:31.163 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme9n1 ended in about 0.94 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme9n1 : 0.94 135.82 8.49 67.91 0.00 262186.48 23787.14 282727.16
00:16:31.163 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:31.163 Job: Nvme10n1 ended in about 0.94 seconds with error
00:16:31.163 Verification LBA range: start 0x0 length 0x400
00:16:31.163 Nvme10n1 : 0.94 136.80 8.55 68.40 0.00 254097.38 19029.71 256318.58
00:16:31.163 ===================================================================================================================
00:16:31.163 Total : 1680.47 105.03 696.19 0.00 242941.58 4320.52 282727.16
00:16:31.163 [2024-04-18 17:03:46.820013] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:31.163 [2024-04-18 17:03:46.820102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:16:31.163 [2024-04-18 17:03:46.820463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.163 [2024-04-18 17:03:46.820594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.163 [2024-04-18 17:03:46.820621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x137ac00 with addr=10.0.0.2, port=4420 00:16:31.163 [2024-04-18 17:03:46.820641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137ac00 is same with the state(5) to be set 00:16:31.163 [2024-04-18 17:03:46.820745] posix.c:1037:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:16:31.163 [2024-04-18 17:03:46.820858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.820884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134ee10 with addr=10.0.0.2, port=4420 00:16:31.164 [2024-04-18 17:03:46.820899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee10 is same with the state(5) to be set 00:16:31.164 [2024-04-18 17:03:46.820926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138ccc0 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.820948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf560 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.820966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e58f0 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.820983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1357150 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.821307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.821440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.821466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdaf870 with addr=10.0.0.2, port=4420 00:16:31.164 [2024-04-18 17:03:46.821483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaf870 is same with the state(5) to be set 00:16:31.164 [2024-04-18 17:03:46.821616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.821725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.821749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x11c5380 with addr=10.0.0.2, port=4420 00:16:31.164 [2024-04-18 17:03:46.821764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c5380 is same with the state(5) to be set 00:16:31.164 [2024-04-18 17:03:46.821879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.821975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.822010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13661f0 with addr=10.0.0.2, port=4420 00:16:31.164 [2024-04-18 17:03:46.822025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13661f0 is same with the state(5) to be set 00:16:31.164 [2024-04-18 17:03:46.822118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.822218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:31.164 [2024-04-18 17:03:46.822242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1385a20 with addr=10.0.0.2, port=4420 00:16:31.164 [2024-04-18 17:03:46.822256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1385a20 is same with the state(5) to be set 00:16:31.164 [2024-04-18 17:03:46.822273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137ac00 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.822292] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134ee10 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.822308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.822321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller 
reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.822336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:16:31.164 [2024-04-18 17:03:46.822358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.822373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.822392] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:16:31.164 [2024-04-18 17:03:46.822409] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.822433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.822445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:16:31.164 [2024-04-18 17:03:46.822461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.822475] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.822486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:16:31.164 [2024-04-18 17:03:46.822515] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:31.164 [2024-04-18 17:03:46.822535] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:31.164 [2024-04-18 17:03:46.822553] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:31.164 [2024-04-18 17:03:46.822570] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:31.164 [2024-04-18 17:03:46.822589] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:31.164 [2024-04-18 17:03:46.822607] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:31.164 [2024-04-18 17:03:46.823018] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823056] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823067] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf870 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.823107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c5380 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.823125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13661f0 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.823141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1385a20 (9): Bad file descriptor 00:16:31.164 [2024-04-18 17:03:46.823156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.823168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.823180] 
nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:16:31.164 [2024-04-18 17:03:46.823197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.823210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.823222] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:16:31.164 [2024-04-18 17:03:46.823572] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823597] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.823622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.823635] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:31.164 [2024-04-18 17:03:46.823651] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.823664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.823676] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:16:31.164 [2024-04-18 17:03:46.823690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.823703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.823715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:16:31.164 [2024-04-18 17:03:46.823730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:16:31.164 [2024-04-18 17:03:46.823743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:16:31.164 [2024-04-18 17:03:46.823754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:16:31.164 [2024-04-18 17:03:46.823803] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823822] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823834] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.164 [2024-04-18 17:03:46.823845] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:31.731 17:03:47 -- target/shutdown.sh@136 -- # nvmfpid= 00:16:31.731 17:03:47 -- target/shutdown.sh@139 -- # sleep 1 00:16:32.669 17:03:48 -- target/shutdown.sh@142 -- # kill -9 1711580 00:16:32.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1711580) - No such process 00:16:32.669 17:03:48 -- target/shutdown.sh@142 -- # true 00:16:32.669 17:03:48 -- target/shutdown.sh@144 -- # stoptarget 00:16:32.669 17:03:48 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:32.669 17:03:48 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:32.669 17:03:48 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:32.669 17:03:48 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:32.669 17:03:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:32.669 17:03:48 -- nvmf/common.sh@117 -- # sync 00:16:32.669 17:03:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.669 17:03:48 -- nvmf/common.sh@120 -- # set +e 00:16:32.669 17:03:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.669 17:03:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.669 rmmod nvme_tcp 00:16:32.669 rmmod nvme_fabrics 00:16:32.928 rmmod nvme_keyring 00:16:32.928 17:03:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.928 17:03:48 -- nvmf/common.sh@124 -- # set -e 00:16:32.928 17:03:48 -- nvmf/common.sh@125 -- # return 0 00:16:32.928 17:03:48 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:32.928 17:03:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:32.928 17:03:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:32.928 17:03:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:32.928 17:03:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.928 17:03:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.928 17:03:48 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.928 17:03:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.928 17:03:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.833 17:03:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.833 00:16:34.833 real 0m7.565s 00:16:34.833 user 0m18.588s 00:16:34.833 sys 0m1.454s 00:16:34.833 17:03:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:34.833 17:03:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.833 ************************************ 00:16:34.833 END TEST nvmf_shutdown_tc3 00:16:34.833 ************************************ 00:16:34.833 17:03:50 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:16:34.833 00:16:34.833 real 0m28.299s 00:16:34.833 user 1m19.249s 00:16:34.833 sys 0m6.490s 00:16:34.833 17:03:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:34.833 17:03:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.833 ************************************ 00:16:34.833 END TEST nvmf_shutdown 00:16:34.833 ************************************ 00:16:34.833 17:03:50 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:16:34.833 17:03:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:34.833 17:03:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.833 17:03:50 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:16:34.833 17:03:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:34.833 17:03:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.833 17:03:50 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:16:34.833 17:03:50 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:34.833 17:03:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:34.833 17:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:34.833 17:03:50 -- common/autotest_common.sh@10 -- # set 
+x 00:16:35.092 ************************************ 00:16:35.092 START TEST nvmf_multicontroller 00:16:35.092 ************************************ 00:16:35.092 17:03:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:35.092 * Looking for test storage... 00:16:35.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:35.092 17:03:50 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.092 17:03:50 -- nvmf/common.sh@7 -- # uname -s 00:16:35.092 17:03:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.092 17:03:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.092 17:03:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.092 17:03:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.092 17:03:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.092 17:03:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.092 17:03:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.092 17:03:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.092 17:03:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.092 17:03:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.092 17:03:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.092 17:03:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.092 17:03:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.092 17:03:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.092 17:03:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.092 17:03:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.092 17:03:50 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.092 17:03:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.092 17:03:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.092 17:03:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.092 17:03:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.092 17:03:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.092 17:03:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.092 17:03:50 -- paths/export.sh@5 -- # export PATH 00:16:35.092 17:03:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.092 17:03:50 -- nvmf/common.sh@47 -- # : 0 00:16:35.092 17:03:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.092 17:03:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.092 17:03:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.092 17:03:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.092 17:03:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.092 17:03:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.092 17:03:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.092 17:03:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.092 17:03:50 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.092 17:03:50 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.092 17:03:50 -- 
host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:35.092 17:03:50 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:35.092 17:03:50 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.092 17:03:50 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:35.092 17:03:50 -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:35.092 17:03:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:35.092 17:03:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.092 17:03:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:35.092 17:03:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:35.092 17:03:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:35.092 17:03:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.092 17:03:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.092 17:03:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.092 17:03:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:35.092 17:03:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:35.092 17:03:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:35.092 17:03:50 -- common/autotest_common.sh@10 -- # set +x 00:16:36.995 17:03:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:36.995 17:03:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:36.995 17:03:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:36.995 17:03:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:36.995 17:03:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:36.995 17:03:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:36.995 17:03:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:36.995 17:03:52 -- nvmf/common.sh@295 -- # net_devs=() 00:16:36.995 17:03:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:36.995 17:03:52 -- nvmf/common.sh@296 -- # e810=() 00:16:36.995 17:03:52 -- nvmf/common.sh@296 -- # local 
-ga e810 00:16:36.996 17:03:52 -- nvmf/common.sh@297 -- # x722=() 00:16:36.996 17:03:52 -- nvmf/common.sh@297 -- # local -ga x722 00:16:36.996 17:03:52 -- nvmf/common.sh@298 -- # mlx=() 00:16:36.996 17:03:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:36.996 17:03:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.996 17:03:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:36.996 17:03:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:36.996 17:03:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:36.996 17:03:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.996 17:03:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:36.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.996 17:03:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.996 17:03:52 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.996 17:03:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.996 17:03:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:36.996 17:03:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.996 17:03:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.996 17:03:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:36.996 17:03:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.996 17:03:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.996 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.996 17:03:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.996 17:03:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.996 17:03:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.996 17:03:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:36.996 17:03:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.996 17:03:52 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.996 17:03:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.996 17:03:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:36.996 17:03:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:36.996 17:03:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:36.996 17:03:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:36.996 17:03:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.996 17:03:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.996 17:03:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.996 17:03:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:36.996 17:03:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.996 17:03:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.996 17:03:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:36.996 17:03:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.996 17:03:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.996 17:03:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:36.996 17:03:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:36.996 17:03:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.996 17:03:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.996 17:03:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.996 17:03:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.996 17:03:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:36.996 17:03:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.996 17:03:52 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:16:36.996 17:03:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:37.257 17:03:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:37.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:16:37.257 00:16:37.257 --- 10.0.0.2 ping statistics --- 00:16:37.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.257 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:37.257 17:03:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:37.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:16:37.257 00:16:37.257 --- 10.0.0.1 ping statistics --- 00:16:37.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.257 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:16:37.257 17:03:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.257 17:03:52 -- nvmf/common.sh@411 -- # return 0 00:16:37.257 17:03:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:37.257 17:03:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.257 17:03:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:37.257 17:03:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:37.257 17:03:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.257 17:03:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:37.257 17:03:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:37.257 17:03:52 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:37.257 17:03:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:37.257 17:03:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:37.257 17:03:52 -- common/autotest_common.sh@10 -- # set +x 00:16:37.257 17:03:52 -- nvmf/common.sh@470 -- # nvmfpid=1713991 00:16:37.257 
17:03:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:37.257 17:03:52 -- nvmf/common.sh@471 -- # waitforlisten 1713991 00:16:37.257 17:03:52 -- common/autotest_common.sh@817 -- # '[' -z 1713991 ']' 00:16:37.257 17:03:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.257 17:03:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.257 17:03:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.257 17:03:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.257 17:03:52 -- common/autotest_common.sh@10 -- # set +x 00:16:37.257 [2024-04-18 17:03:52.780180] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:37.257 [2024-04-18 17:03:52.780260] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.257 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.257 [2024-04-18 17:03:52.848578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.257 [2024-04-18 17:03:52.957330] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.257 [2024-04-18 17:03:52.957420] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.257 [2024-04-18 17:03:52.957437] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.257 [2024-04-18 17:03:52.957449] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:37.257 [2024-04-18 17:03:52.957460] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.257 [2024-04-18 17:03:52.957573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.257 [2024-04-18 17:03:52.957627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.257 [2024-04-18 17:03:52.957630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.524 17:03:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:37.524 17:03:53 -- common/autotest_common.sh@850 -- # return 0 00:16:37.524 17:03:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:37.524 17:03:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:37.524 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.524 17:03:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.524 17:03:53 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:37.524 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.524 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.524 [2024-04-18 17:03:53.105236] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.524 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.524 17:03:53 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:37.524 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.524 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.524 Malloc0 00:16:37.524 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.524 17:03:53 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:37.524 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.524 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.524 
17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.524 17:03:53 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.524 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.524 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.524 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.524 17:03:53 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.524 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.524 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.525 [2024-04-18 17:03:53.175289] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.525 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.525 17:03:53 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:37.525 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.525 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.525 [2024-04-18 17:03:53.183191] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:37.525 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.525 17:03:53 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:37.525 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.525 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.525 Malloc1 00:16:37.525 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.525 17:03:53 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:37.525 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.525 17:03:53 -- common/autotest_common.sh@10 -- # 
set +x 00:16:37.525 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.525 17:03:53 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:37.525 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.525 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.819 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.819 17:03:53 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:37.819 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.819 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.819 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.819 17:03:53 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:37.819 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.819 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.819 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.819 17:03:53 -- host/multicontroller.sh@44 -- # bdevperf_pid=1714136 00:16:37.819 17:03:53 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.819 17:03:53 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:37.819 17:03:53 -- host/multicontroller.sh@47 -- # waitforlisten 1714136 /var/tmp/bdevperf.sock 00:16:37.819 17:03:53 -- common/autotest_common.sh@817 -- # '[' -z 1714136 ']' 00:16:37.819 17:03:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.819 17:03:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.819 17:03:53 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.819 17:03:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.819 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:38.078 17:03:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:38.078 17:03:53 -- common/autotest_common.sh@850 -- # return 0 00:16:38.078 17:03:53 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:38.078 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.078 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:38.336 NVMe0n1 00:16:38.336 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.336 17:03:53 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:38.336 17:03:53 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:38.336 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.336 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:38.336 17:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.336 1 00:16:38.336 17:03:53 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:38.336 17:03:53 -- common/autotest_common.sh@638 -- # local es=0 00:16:38.336 17:03:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:38.336 17:03:53 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:38.336 17:03:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:38.336 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.336 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:38.336 request: 00:16:38.336 { 00:16:38.336 "name": "NVMe0", 00:16:38.336 "trtype": "tcp", 00:16:38.336 "traddr": "10.0.0.2", 00:16:38.336 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:38.336 "hostaddr": "10.0.0.2", 00:16:38.336 "hostsvcid": "60000", 00:16:38.336 "adrfam": "ipv4", 00:16:38.336 "trsvcid": "4420", 00:16:38.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.336 "method": "bdev_nvme_attach_controller", 00:16:38.336 "req_id": 1 00:16:38.336 } 00:16:38.336 Got JSON-RPC error response 00:16:38.336 response: 00:16:38.336 { 00:16:38.336 "code": -114, 00:16:38.336 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:38.336 } 00:16:38.336 17:03:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:38.336 17:03:53 -- common/autotest_common.sh@641 -- # es=1 00:16:38.336 17:03:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:38.336 17:03:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:38.336 17:03:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:38.336 17:03:53 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:38.336 17:03:53 -- 
common/autotest_common.sh@638 -- # local es=0 00:16:38.336 17:03:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:38.336 17:03:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:38.336 17:03:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:38.336 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.336 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:38.336 request: 00:16:38.336 { 00:16:38.336 "name": "NVMe0", 00:16:38.336 "trtype": "tcp", 00:16:38.336 "traddr": "10.0.0.2", 00:16:38.336 "hostaddr": "10.0.0.2", 00:16:38.336 "hostsvcid": "60000", 00:16:38.336 "adrfam": "ipv4", 00:16:38.336 "trsvcid": "4420", 00:16:38.336 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:38.336 "method": "bdev_nvme_attach_controller", 00:16:38.336 "req_id": 1 00:16:38.336 } 00:16:38.336 Got JSON-RPC error response 00:16:38.336 response: 00:16:38.336 { 00:16:38.336 "code": -114, 00:16:38.336 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:38.336 } 00:16:38.336 17:03:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:38.336 17:03:53 -- common/autotest_common.sh@641 -- # es=1 00:16:38.336 17:03:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:38.336 17:03:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:38.336 17:03:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:38.336 17:03:53 -- 
host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:38.336 17:03:53 -- common/autotest_common.sh@638 -- # local es=0 00:16:38.336 17:03:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:38.336 17:03:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:38.336 17:03:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:38.336 17:03:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:38.337 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.337 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:38.337 request: 00:16:38.337 { 00:16:38.337 "name": "NVMe0", 00:16:38.337 "trtype": "tcp", 00:16:38.337 "traddr": "10.0.0.2", 00:16:38.337 "hostaddr": "10.0.0.2", 00:16:38.337 "hostsvcid": "60000", 00:16:38.337 "adrfam": "ipv4", 00:16:38.337 "trsvcid": "4420", 00:16:38.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.337 "multipath": "disable", 00:16:38.337 "method": "bdev_nvme_attach_controller", 00:16:38.337 "req_id": 1 00:16:38.337 } 00:16:38.337 Got JSON-RPC error response 00:16:38.337 response: 00:16:38.337 { 00:16:38.337 "code": -114, 00:16:38.337 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:16:38.337 } 00:16:38.337 17:03:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:38.337 17:03:53 -- 
common/autotest_common.sh@641 -- # es=1 00:16:38.337 17:03:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:38.337 17:03:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:38.337 17:03:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:38.337 17:03:53 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:38.337 17:03:53 -- common/autotest_common.sh@638 -- # local es=0 00:16:38.337 17:03:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:38.337 17:03:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:38.337 17:03:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:38.337 17:03:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:38.337 17:03:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:38.337 17:03:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:38.337 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.337 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:38.337 request: 00:16:38.337 { 00:16:38.337 "name": "NVMe0", 00:16:38.337 "trtype": "tcp", 00:16:38.337 "traddr": "10.0.0.2", 00:16:38.337 "hostaddr": "10.0.0.2", 00:16:38.337 "hostsvcid": "60000", 00:16:38.337 "adrfam": "ipv4", 00:16:38.337 "trsvcid": "4420", 00:16:38.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.337 "multipath": "failover", 00:16:38.337 "method": "bdev_nvme_attach_controller", 00:16:38.337 "req_id": 1 00:16:38.337 } 00:16:38.337 Got JSON-RPC error response 
00:16:38.337 response: 00:16:38.337 { 00:16:38.337 "code": -114, 00:16:38.337 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:38.337 } 00:16:38.337 17:03:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:38.337 17:03:53 -- common/autotest_common.sh@641 -- # es=1 00:16:38.337 17:03:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:38.337 17:03:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:38.337 17:03:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:38.337 17:03:53 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:38.337 17:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.337 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:38.595 00:16:38.595 17:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.595 17:03:54 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:38.595 17:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.595 17:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.595 17:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.595 17:03:54 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:38.595 17:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.595 17:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.595 00:16:38.595 17:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.595 17:03:54 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:38.595 17:03:54 -- host/multicontroller.sh@90 -- # grep -c NVMe 
00:16:38.595 17:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.595 17:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.595 17:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.595 17:03:54 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:38.595 17:03:54 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:39.975 0 00:16:39.975 17:03:55 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:39.975 17:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.975 17:03:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.975 17:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.975 17:03:55 -- host/multicontroller.sh@100 -- # killprocess 1714136 00:16:39.975 17:03:55 -- common/autotest_common.sh@936 -- # '[' -z 1714136 ']' 00:16:39.975 17:03:55 -- common/autotest_common.sh@940 -- # kill -0 1714136 00:16:39.975 17:03:55 -- common/autotest_common.sh@941 -- # uname 00:16:39.975 17:03:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.975 17:03:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1714136 00:16:39.975 17:03:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:39.975 17:03:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:39.975 17:03:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1714136' 00:16:39.975 killing process with pid 1714136 00:16:39.975 17:03:55 -- common/autotest_common.sh@955 -- # kill 1714136 00:16:39.975 17:03:55 -- common/autotest_common.sh@960 -- # wait 1714136 00:16:39.975 17:03:55 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.975 17:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.975 17:03:55 -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.975 17:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.975 17:03:55 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:39.975 17:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.975 17:03:55 -- common/autotest_common.sh@10 -- # set +x 00:16:40.234 17:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.234 17:03:55 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:40.234 17:03:55 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:16:40.234 17:03:55 -- common/autotest_common.sh@1598 -- # read -r file 00:16:40.234 17:03:55 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:16:40.234 17:03:55 -- common/autotest_common.sh@1597 -- # sort -u 00:16:40.234 17:03:55 -- common/autotest_common.sh@1599 -- # cat 00:16:40.234 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:16:40.235 [2024-04-18 17:03:53.286085] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:40.235 [2024-04-18 17:03:53.286187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714136 ] 00:16:40.235 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.235 [2024-04-18 17:03:53.346309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.235 [2024-04-18 17:03:53.452528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.235 [2024-04-18 17:03:54.245104] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 698bd06c-2c3a-4d80-8ce6-7369ebf36825 already exists 00:16:40.235 [2024-04-18 17:03:54.245143] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:698bd06c-2c3a-4d80-8ce6-7369ebf36825 alias for bdev NVMe1n1 00:16:40.235 [2024-04-18 17:03:54.245170] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:40.235 Running I/O for 1 seconds... 00:16:40.235 00:16:40.235 Latency(us) 00:16:40.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.235 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:40.235 NVMe0n1 : 1.01 18909.31 73.86 0.00 0.00 6759.10 2051.03 12087.75 00:16:40.235 =================================================================================================================== 00:16:40.235 Total : 18909.31 73.86 0.00 0.00 6759.10 2051.03 12087.75 00:16:40.235 Received shutdown signal, test time was about 1.000000 seconds 00:16:40.235 00:16:40.235 Latency(us) 00:16:40.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.235 =================================================================================================================== 00:16:40.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.235 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:16:40.235 17:03:55 -- 
common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:16:40.235 17:03:55 -- common/autotest_common.sh@1598 -- # read -r file 00:16:40.235 17:03:55 -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:40.235 17:03:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:40.235 17:03:55 -- nvmf/common.sh@117 -- # sync 00:16:40.235 17:03:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:40.235 17:03:55 -- nvmf/common.sh@120 -- # set +e 00:16:40.235 17:03:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:40.235 17:03:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:40.235 rmmod nvme_tcp 00:16:40.235 rmmod nvme_fabrics 00:16:40.235 rmmod nvme_keyring 00:16:40.235 17:03:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:40.235 17:03:55 -- nvmf/common.sh@124 -- # set -e 00:16:40.235 17:03:55 -- nvmf/common.sh@125 -- # return 0 00:16:40.235 17:03:55 -- nvmf/common.sh@478 -- # '[' -n 1713991 ']' 00:16:40.235 17:03:55 -- nvmf/common.sh@479 -- # killprocess 1713991 00:16:40.235 17:03:55 -- common/autotest_common.sh@936 -- # '[' -z 1713991 ']' 00:16:40.235 17:03:55 -- common/autotest_common.sh@940 -- # kill -0 1713991 00:16:40.235 17:03:55 -- common/autotest_common.sh@941 -- # uname 00:16:40.235 17:03:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.235 17:03:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1713991 00:16:40.235 17:03:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.235 17:03:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.235 17:03:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1713991' 00:16:40.235 killing process with pid 1713991 00:16:40.235 17:03:55 -- common/autotest_common.sh@955 -- # kill 1713991 00:16:40.235 17:03:55 -- common/autotest_common.sh@960 -- # wait 1713991 00:16:40.493 17:03:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:40.493 
17:03:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:40.493 17:03:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:40.493 17:03:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.493 17:03:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.493 17:03:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.493 17:03:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.493 17:03:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.031 17:03:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.031 00:16:43.031 real 0m7.496s 00:16:43.031 user 0m12.249s 00:16:43.031 sys 0m2.195s 00:16:43.031 17:03:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.031 17:03:58 -- common/autotest_common.sh@10 -- # set +x 00:16:43.031 ************************************ 00:16:43.031 END TEST nvmf_multicontroller 00:16:43.031 ************************************ 00:16:43.031 17:03:58 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:43.031 17:03:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.031 17:03:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.031 17:03:58 -- common/autotest_common.sh@10 -- # set +x 00:16:43.031 ************************************ 00:16:43.031 START TEST nvmf_aer 00:16:43.031 ************************************ 00:16:43.031 17:03:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:43.031 * Looking for test storage... 
00:16:43.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:43.031 17:03:58 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.031 17:03:58 -- nvmf/common.sh@7 -- # uname -s 00:16:43.031 17:03:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.031 17:03:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.031 17:03:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.031 17:03:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.031 17:03:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.031 17:03:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.031 17:03:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.031 17:03:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.031 17:03:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.031 17:03:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.031 17:03:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.031 17:03:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.031 17:03:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.031 17:03:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.031 17:03:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.031 17:03:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.032 17:03:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.032 17:03:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.032 17:03:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.032 17:03:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.032 17:03:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.032 17:03:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.032 17:03:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.032 17:03:58 -- paths/export.sh@5 -- # export PATH 00:16:43.032 17:03:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.032 17:03:58 -- nvmf/common.sh@47 -- # : 0 00:16:43.032 17:03:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.032 17:03:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.032 17:03:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.032 17:03:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.032 17:03:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.032 17:03:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.032 17:03:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.032 17:03:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.032 17:03:58 -- host/aer.sh@11 -- # nvmftestinit 00:16:43.032 17:03:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:43.032 17:03:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.032 17:03:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:43.032 17:03:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:43.032 17:03:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:43.032 17:03:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.032 17:03:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.032 17:03:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.032 17:03:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:43.032 17:03:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:43.032 17:03:58 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:16:43.032 17:03:58 -- common/autotest_common.sh@10 -- # set +x 00:16:44.935 17:04:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:44.935 17:04:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:44.935 17:04:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:44.935 17:04:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:44.935 17:04:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:44.935 17:04:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:44.935 17:04:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:44.935 17:04:00 -- nvmf/common.sh@295 -- # net_devs=() 00:16:44.935 17:04:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:44.935 17:04:00 -- nvmf/common.sh@296 -- # e810=() 00:16:44.935 17:04:00 -- nvmf/common.sh@296 -- # local -ga e810 00:16:44.935 17:04:00 -- nvmf/common.sh@297 -- # x722=() 00:16:44.935 17:04:00 -- nvmf/common.sh@297 -- # local -ga x722 00:16:44.935 17:04:00 -- nvmf/common.sh@298 -- # mlx=() 00:16:44.935 17:04:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:44.935 17:04:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.935 17:04:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:44.935 17:04:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:44.935 17:04:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:44.935 17:04:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.935 17:04:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:44.935 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:44.935 17:04:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.935 17:04:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:44.935 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:44.935 17:04:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:44.935 17:04:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:16:44.935 17:04:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.935 17:04:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:44.935 17:04:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.935 17:04:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:44.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:44.935 17:04:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.935 17:04:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.935 17:04:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.935 17:04:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:44.935 17:04:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.935 17:04:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:44.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:44.935 17:04:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.935 17:04:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:44.935 17:04:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:44.935 17:04:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:44.935 17:04:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.935 17:04:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.935 17:04:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.935 17:04:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:44.935 17:04:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.935 17:04:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.935 17:04:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:44.935 17:04:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:16:44.935 17:04:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.935 17:04:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:44.935 17:04:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:44.935 17:04:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.935 17:04:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.935 17:04:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.935 17:04:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.935 17:04:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:44.935 17:04:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.935 17:04:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.935 17:04:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.935 17:04:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:44.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:16:44.935 00:16:44.935 --- 10.0.0.2 ping statistics --- 00:16:44.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.935 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:44.935 17:04:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:16:44.935 00:16:44.935 --- 10.0.0.1 ping statistics --- 00:16:44.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.935 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:16:44.935 17:04:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.935 17:04:00 -- nvmf/common.sh@411 -- # return 0 00:16:44.935 17:04:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:44.935 17:04:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.935 17:04:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:44.935 17:04:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.935 17:04:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:44.936 17:04:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:44.936 17:04:00 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:44.936 17:04:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:44.936 17:04:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:44.936 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:44.936 17:04:00 -- nvmf/common.sh@470 -- # nvmfpid=1716363 00:16:44.936 17:04:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.936 17:04:00 -- nvmf/common.sh@471 -- # waitforlisten 1716363 00:16:44.936 17:04:00 -- common/autotest_common.sh@817 -- # '[' -z 1716363 ']' 00:16:44.936 17:04:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.936 17:04:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.936 17:04:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:44.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.936 17:04:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.936 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:44.936 [2024-04-18 17:04:00.481086] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:44.936 [2024-04-18 17:04:00.481157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.936 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.936 [2024-04-18 17:04:00.545401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.194 [2024-04-18 17:04:00.652647] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.194 [2024-04-18 17:04:00.652698] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.194 [2024-04-18 17:04:00.652721] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.194 [2024-04-18 17:04:00.652732] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.194 [2024-04-18 17:04:00.652743] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.194 [2024-04-18 17:04:00.652821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.194 [2024-04-18 17:04:00.652893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.194 [2024-04-18 17:04:00.652961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.194 [2024-04-18 17:04:00.652958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.194 17:04:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.194 17:04:00 -- common/autotest_common.sh@850 -- # return 0 00:16:45.194 17:04:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:45.194 17:04:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:45.194 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.194 17:04:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.194 17:04:00 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:45.194 17:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.194 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.194 [2024-04-18 17:04:00.811074] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.194 17:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.194 17:04:00 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:45.194 17:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.194 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.194 Malloc0 00:16:45.194 17:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.194 17:04:00 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:45.194 17:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.194 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.194 17:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:16:45.194 17:04:00 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:45.194 17:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.194 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.194 17:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.194 17:04:00 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.194 17:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.194 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.194 [2024-04-18 17:04:00.865018] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.194 17:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.194 17:04:00 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:45.194 17:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.194 17:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:45.194 [2024-04-18 17:04:00.872739] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:45.194 [ 00:16:45.194 { 00:16:45.194 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:45.194 "subtype": "Discovery", 00:16:45.194 "listen_addresses": [], 00:16:45.194 "allow_any_host": true, 00:16:45.194 "hosts": [] 00:16:45.194 }, 00:16:45.194 { 00:16:45.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.194 "subtype": "NVMe", 00:16:45.194 "listen_addresses": [ 00:16:45.194 { 00:16:45.194 "transport": "TCP", 00:16:45.194 "trtype": "TCP", 00:16:45.194 "adrfam": "IPv4", 00:16:45.194 "traddr": "10.0.0.2", 00:16:45.194 "trsvcid": "4420" 00:16:45.194 } 00:16:45.194 ], 00:16:45.194 "allow_any_host": true, 00:16:45.194 "hosts": [], 00:16:45.194 "serial_number": "SPDK00000000000001", 00:16:45.194 "model_number": "SPDK bdev Controller", 
00:16:45.194 "max_namespaces": 2, 00:16:45.194 "min_cntlid": 1, 00:16:45.194 "max_cntlid": 65519, 00:16:45.194 "namespaces": [ 00:16:45.194 { 00:16:45.194 "nsid": 1, 00:16:45.194 "bdev_name": "Malloc0", 00:16:45.194 "name": "Malloc0", 00:16:45.194 "nguid": "A4FBAB64C8D34B1D9DB72D5AFD1D4D74", 00:16:45.194 "uuid": "a4fbab64-c8d3-4b1d-9db7-2d5afd1d4d74" 00:16:45.194 } 00:16:45.194 ] 00:16:45.194 } 00:16:45.194 ] 00:16:45.194 17:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.194 17:04:00 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:45.194 17:04:00 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:45.194 17:04:00 -- host/aer.sh@33 -- # aerpid=1716508 00:16:45.194 17:04:00 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:45.194 17:04:00 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:45.194 17:04:00 -- common/autotest_common.sh@1251 -- # local i=0 00:16:45.194 17:04:00 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:45.194 17:04:00 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:16:45.194 17:04:00 -- common/autotest_common.sh@1254 -- # i=1 00:16:45.194 17:04:00 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:45.454 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.454 17:04:00 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:45.454 17:04:00 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:16:45.454 17:04:00 -- common/autotest_common.sh@1254 -- # i=2 00:16:45.454 17:04:00 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:45.454 17:04:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:45.454 17:04:01 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:45.454 17:04:01 -- common/autotest_common.sh@1262 -- # return 0 00:16:45.454 17:04:01 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:45.454 17:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.454 17:04:01 -- common/autotest_common.sh@10 -- # set +x 00:16:45.454 Malloc1 00:16:45.454 17:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.454 17:04:01 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:45.454 17:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.454 17:04:01 -- common/autotest_common.sh@10 -- # set +x 00:16:45.454 17:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.454 17:04:01 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:45.454 17:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.454 17:04:01 -- common/autotest_common.sh@10 -- # set +x 00:16:45.714 Asynchronous Event Request test 00:16:45.714 Attaching to 10.0.0.2 00:16:45.714 Attached to 10.0.0.2 00:16:45.714 Registering asynchronous event callbacks... 00:16:45.714 Starting namespace attribute notice tests for all controllers... 00:16:45.714 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:45.714 aer_cb - Changed Namespace 00:16:45.714 Cleaning up... 
00:16:45.714 [ 00:16:45.714 { 00:16:45.714 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:45.714 "subtype": "Discovery", 00:16:45.714 "listen_addresses": [], 00:16:45.714 "allow_any_host": true, 00:16:45.714 "hosts": [] 00:16:45.714 }, 00:16:45.714 { 00:16:45.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.714 "subtype": "NVMe", 00:16:45.714 "listen_addresses": [ 00:16:45.714 { 00:16:45.714 "transport": "TCP", 00:16:45.714 "trtype": "TCP", 00:16:45.714 "adrfam": "IPv4", 00:16:45.714 "traddr": "10.0.0.2", 00:16:45.715 "trsvcid": "4420" 00:16:45.715 } 00:16:45.715 ], 00:16:45.715 "allow_any_host": true, 00:16:45.715 "hosts": [], 00:16:45.715 "serial_number": "SPDK00000000000001", 00:16:45.715 "model_number": "SPDK bdev Controller", 00:16:45.715 "max_namespaces": 2, 00:16:45.715 "min_cntlid": 1, 00:16:45.715 "max_cntlid": 65519, 00:16:45.715 "namespaces": [ 00:16:45.715 { 00:16:45.715 "nsid": 1, 00:16:45.715 "bdev_name": "Malloc0", 00:16:45.715 "name": "Malloc0", 00:16:45.715 "nguid": "A4FBAB64C8D34B1D9DB72D5AFD1D4D74", 00:16:45.715 "uuid": "a4fbab64-c8d3-4b1d-9db7-2d5afd1d4d74" 00:16:45.715 }, 00:16:45.715 { 00:16:45.715 "nsid": 2, 00:16:45.715 "bdev_name": "Malloc1", 00:16:45.715 "name": "Malloc1", 00:16:45.715 "nguid": "D3871D9341BE4D949DBA9A08366080C3", 00:16:45.715 "uuid": "d3871d93-41be-4d94-9dba-9a08366080c3" 00:16:45.715 } 00:16:45.715 ] 00:16:45.715 } 00:16:45.715 ] 00:16:45.715 17:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.715 17:04:01 -- host/aer.sh@43 -- # wait 1716508 00:16:45.715 17:04:01 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:45.715 17:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.715 17:04:01 -- common/autotest_common.sh@10 -- # set +x 00:16:45.715 17:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.715 17:04:01 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:45.715 17:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.715 
17:04:01 -- common/autotest_common.sh@10 -- # set +x 00:16:45.715 17:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.715 17:04:01 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.715 17:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.715 17:04:01 -- common/autotest_common.sh@10 -- # set +x 00:16:45.715 17:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.715 17:04:01 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:45.715 17:04:01 -- host/aer.sh@51 -- # nvmftestfini 00:16:45.715 17:04:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:45.715 17:04:01 -- nvmf/common.sh@117 -- # sync 00:16:45.715 17:04:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.715 17:04:01 -- nvmf/common.sh@120 -- # set +e 00:16:45.715 17:04:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.715 17:04:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.715 rmmod nvme_tcp 00:16:45.715 rmmod nvme_fabrics 00:16:45.715 rmmod nvme_keyring 00:16:45.715 17:04:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.715 17:04:01 -- nvmf/common.sh@124 -- # set -e 00:16:45.715 17:04:01 -- nvmf/common.sh@125 -- # return 0 00:16:45.715 17:04:01 -- nvmf/common.sh@478 -- # '[' -n 1716363 ']' 00:16:45.715 17:04:01 -- nvmf/common.sh@479 -- # killprocess 1716363 00:16:45.715 17:04:01 -- common/autotest_common.sh@936 -- # '[' -z 1716363 ']' 00:16:45.715 17:04:01 -- common/autotest_common.sh@940 -- # kill -0 1716363 00:16:45.715 17:04:01 -- common/autotest_common.sh@941 -- # uname 00:16:45.715 17:04:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.715 17:04:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1716363 00:16:45.715 17:04:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:45.715 17:04:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:45.715 17:04:01 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 1716363' 00:16:45.715 killing process with pid 1716363 00:16:45.715 17:04:01 -- common/autotest_common.sh@955 -- # kill 1716363 00:16:45.715 [2024-04-18 17:04:01.312613] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:45.715 17:04:01 -- common/autotest_common.sh@960 -- # wait 1716363 00:16:45.975 17:04:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:45.975 17:04:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:45.975 17:04:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:45.975 17:04:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.975 17:04:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.975 17:04:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.975 17:04:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.975 17:04:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.510 17:04:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:48.510 00:16:48.510 real 0m5.392s 00:16:48.510 user 0m4.216s 00:16:48.510 sys 0m1.883s 00:16:48.510 17:04:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:48.510 17:04:03 -- common/autotest_common.sh@10 -- # set +x 00:16:48.510 ************************************ 00:16:48.510 END TEST nvmf_aer 00:16:48.510 ************************************ 00:16:48.510 17:04:03 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:48.510 17:04:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:48.510 17:04:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.510 17:04:03 -- common/autotest_common.sh@10 -- # set +x 00:16:48.510 ************************************ 00:16:48.510 START TEST nvmf_async_init 00:16:48.510 
************************************ 00:16:48.510 17:04:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:48.510 * Looking for test storage... 00:16:48.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:48.510 17:04:03 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.510 17:04:03 -- nvmf/common.sh@7 -- # uname -s 00:16:48.510 17:04:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.510 17:04:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.510 17:04:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.510 17:04:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.510 17:04:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.510 17:04:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.510 17:04:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.510 17:04:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.510 17:04:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.510 17:04:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.510 17:04:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.510 17:04:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.510 17:04:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.510 17:04:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.510 17:04:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.510 17:04:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.510 17:04:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.510 17:04:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 
00:16:48.510 17:04:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.510 17:04:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.510 17:04:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.510 17:04:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.510 17:04:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.510 17:04:03 -- paths/export.sh@5 -- # export PATH 00:16:48.510 17:04:03 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.510 17:04:03 -- nvmf/common.sh@47 -- # : 0 00:16:48.510 17:04:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.510 17:04:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.510 17:04:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.510 17:04:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.510 17:04:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.510 17:04:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.510 17:04:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.510 17:04:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.510 17:04:03 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:48.510 17:04:03 -- host/async_init.sh@14 -- # null_block_size=512 00:16:48.510 17:04:03 -- host/async_init.sh@15 -- # null_bdev=null0 00:16:48.510 17:04:03 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:48.510 17:04:03 -- host/async_init.sh@20 -- # uuidgen 00:16:48.510 17:04:03 -- host/async_init.sh@20 -- # tr -d - 00:16:48.510 17:04:03 -- host/async_init.sh@20 -- # nguid=fc94def49d18411ba9d9239609778cec 00:16:48.510 17:04:03 -- host/async_init.sh@22 -- # nvmftestinit 00:16:48.510 17:04:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:48.510 17:04:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.510 17:04:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:48.511 17:04:03 -- nvmf/common.sh@399 -- # 
local -g is_hw=no 00:16:48.511 17:04:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:48.511 17:04:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.511 17:04:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.511 17:04:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.511 17:04:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:48.511 17:04:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:48.511 17:04:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:48.511 17:04:03 -- common/autotest_common.sh@10 -- # set +x 00:16:50.412 17:04:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:50.412 17:04:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.412 17:04:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.412 17:04:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.412 17:04:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.412 17:04:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.412 17:04:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.412 17:04:05 -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.412 17:04:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.412 17:04:05 -- nvmf/common.sh@296 -- # e810=() 00:16:50.412 17:04:05 -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.412 17:04:05 -- nvmf/common.sh@297 -- # x722=() 00:16:50.412 17:04:05 -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.412 17:04:05 -- nvmf/common.sh@298 -- # mlx=() 00:16:50.412 17:04:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.412 17:04:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.412 17:04:05 -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.412 17:04:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:50.412 17:04:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:50.412 17:04:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:50.412 17:04:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:50.412 17:04:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:50.412 17:04:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.413 17:04:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.413 17:04:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:50.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:50.413 17:04:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.413 17:04:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:50.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:50.413 17:04:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.413 
17:04:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.413 17:04:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.413 17:04:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.413 17:04:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:50.413 17:04:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.413 17:04:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:50.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:50.413 17:04:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.413 17:04:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.413 17:04:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.413 17:04:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:50.413 17:04:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.413 17:04:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:50.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:50.413 17:04:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.413 17:04:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:50.413 17:04:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:50.413 17:04:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:50.413 17:04:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.413 17:04:05 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.413 17:04:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.413 17:04:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:50.413 17:04:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.413 17:04:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.413 17:04:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:50.413 17:04:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.413 17:04:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.413 17:04:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:50.413 17:04:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:50.413 17:04:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.413 17:04:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.413 17:04:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.413 17:04:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.413 17:04:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:50.413 17:04:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.413 17:04:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.413 17:04:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.413 17:04:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:50.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:50.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:16:50.413 00:16:50.413 --- 10.0.0.2 ping statistics --- 00:16:50.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.413 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:16:50.413 17:04:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:16:50.413 00:16:50.413 --- 10.0.0.1 ping statistics --- 00:16:50.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.413 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:50.413 17:04:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.413 17:04:05 -- nvmf/common.sh@411 -- # return 0 00:16:50.413 17:04:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:50.413 17:04:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.413 17:04:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:50.413 17:04:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.413 17:04:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:50.413 17:04:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:50.413 17:04:05 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:50.413 17:04:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:50.413 17:04:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:50.413 17:04:05 -- common/autotest_common.sh@10 -- # set +x 00:16:50.413 17:04:05 -- nvmf/common.sh@470 -- # nvmfpid=1718454 00:16:50.413 17:04:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:50.413 17:04:05 -- nvmf/common.sh@471 -- # waitforlisten 1718454 00:16:50.413 17:04:05 -- common/autotest_common.sh@817 
-- # '[' -z 1718454 ']' 00:16:50.413 17:04:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.413 17:04:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.413 17:04:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.413 17:04:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.413 17:04:05 -- common/autotest_common.sh@10 -- # set +x 00:16:50.413 [2024-04-18 17:04:06.031094] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:50.413 [2024-04-18 17:04:06.031192] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.413 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.413 [2024-04-18 17:04:06.099783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.672 [2024-04-18 17:04:06.213253] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.672 [2024-04-18 17:04:06.213318] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.672 [2024-04-18 17:04:06.213343] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.672 [2024-04-18 17:04:06.213356] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.672 [2024-04-18 17:04:06.213368] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
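The log above shows nvmf/common.sh verifying connectivity into the test network namespace with single pings before launching nvmf_tgt inside that namespace. A minimal sketch of that check, assuming the `check_ping` helper (the namespace name, addresses, and nvmf_tgt flags are taken from the log; the helper itself is a hypothetical convenience, not part of the SPDK harness):

```shell
#!/usr/bin/env bash
# Namespace name as it appears in the log above.
NETNS=cvl_0_0_ns_spdk

# Hypothetical helper: succeed only if a ping summary reports no packet loss.
check_ping() {
    # $1 = captured ping output; match the "0% packet loss" summary line.
    # The leading space keeps "100% packet loss" from matching.
    printf '%s\n' "$1" | grep -q ' 0% packet loss'
}

# In the CI job these run for real; shown here only for shape:
#   ip netns exec "$NETNS" ping -c 1 10.0.0.1
#   ip netns exec "$NETNS" ping -c 1 10.0.0.2
# followed by starting the target inside the same namespace:
#   ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
```

Only once both directions answer does the harness proceed to start the target and wait for its RPC socket.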
00:16:50.672 [2024-04-18 17:04:06.213410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.609 17:04:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.609 17:04:06 -- common/autotest_common.sh@850 -- # return 0 00:16:51.609 17:04:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:51.609 17:04:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:51.609 17:04:06 -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 17:04:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.609 17:04:06 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:51.609 17:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.609 17:04:06 -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 [2024-04-18 17:04:06.988080] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.609 17:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.609 17:04:06 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:51.609 17:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.609 17:04:06 -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 null0 00:16:51.609 17:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.609 17:04:07 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:51.609 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.609 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.609 17:04:07 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:51.609 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.609 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.610 17:04:07 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fc94def49d18411ba9d9239609778cec 00:16:51.610 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.610 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.610 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.610 17:04:07 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:51.610 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.610 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.610 [2024-04-18 17:04:07.028336] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.610 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.610 17:04:07 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:51.610 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.610 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.610 nvme0n1 00:16:51.610 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.610 17:04:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:51.610 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.610 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.610 [ 00:16:51.610 { 00:16:51.610 "name": "nvme0n1", 00:16:51.610 "aliases": [ 00:16:51.610 "fc94def4-9d18-411b-a9d9-239609778cec" 00:16:51.610 ], 00:16:51.610 "product_name": "NVMe disk", 00:16:51.610 "block_size": 512, 00:16:51.610 "num_blocks": 2097152, 00:16:51.610 "uuid": "fc94def4-9d18-411b-a9d9-239609778cec", 00:16:51.610 "assigned_rate_limits": { 00:16:51.610 "rw_ios_per_sec": 0, 00:16:51.610 "rw_mbytes_per_sec": 0, 00:16:51.610 "r_mbytes_per_sec": 0, 00:16:51.610 "w_mbytes_per_sec": 0 00:16:51.610 }, 00:16:51.610 
"claimed": false, 00:16:51.610 "zoned": false, 00:16:51.610 "supported_io_types": { 00:16:51.610 "read": true, 00:16:51.610 "write": true, 00:16:51.610 "unmap": false, 00:16:51.610 "write_zeroes": true, 00:16:51.610 "flush": true, 00:16:51.610 "reset": true, 00:16:51.610 "compare": true, 00:16:51.610 "compare_and_write": true, 00:16:51.610 "abort": true, 00:16:51.610 "nvme_admin": true, 00:16:51.610 "nvme_io": true 00:16:51.610 }, 00:16:51.610 "memory_domains": [ 00:16:51.610 { 00:16:51.610 "dma_device_id": "system", 00:16:51.610 "dma_device_type": 1 00:16:51.610 } 00:16:51.610 ], 00:16:51.610 "driver_specific": { 00:16:51.610 "nvme": [ 00:16:51.610 { 00:16:51.610 "trid": { 00:16:51.610 "trtype": "TCP", 00:16:51.610 "adrfam": "IPv4", 00:16:51.610 "traddr": "10.0.0.2", 00:16:51.610 "trsvcid": "4420", 00:16:51.610 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:51.610 }, 00:16:51.610 "ctrlr_data": { 00:16:51.610 "cntlid": 1, 00:16:51.610 "vendor_id": "0x8086", 00:16:51.610 "model_number": "SPDK bdev Controller", 00:16:51.610 "serial_number": "00000000000000000000", 00:16:51.610 "firmware_revision": "24.05", 00:16:51.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.610 "oacs": { 00:16:51.610 "security": 0, 00:16:51.610 "format": 0, 00:16:51.610 "firmware": 0, 00:16:51.610 "ns_manage": 0 00:16:51.610 }, 00:16:51.610 "multi_ctrlr": true, 00:16:51.610 "ana_reporting": false 00:16:51.610 }, 00:16:51.610 "vs": { 00:16:51.610 "nvme_version": "1.3" 00:16:51.610 }, 00:16:51.610 "ns_data": { 00:16:51.610 "id": 1, 00:16:51.610 "can_share": true 00:16:51.610 } 00:16:51.610 } 00:16:51.610 ], 00:16:51.610 "mp_policy": "active_passive" 00:16:51.610 } 00:16:51.610 } 00:16:51.610 ] 00:16:51.610 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.610 17:04:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:51.610 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.610 17:04:07 -- common/autotest_common.sh@10 -- # set +x 
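The target state dumped in the JSON above was built by the RPC sequence visible in the log (transport, null bdev, subsystem, namespace, listener, then a host-side attach). A sketch of that sequence, assuming `scripts/rpc.py` as the transport for the calls — `rpc_cmd` in the autotest harness plays roughly this role — with the `rpc` wrapper and `SPDK_DIR` default being assumptions; command names and arguments mirror the log:

```shell
#!/usr/bin/env bash
set -e
# Assumed checkout location; override SPDK_DIR for your environment.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

NQN=nqn.2016-06.io.spdk:cnode0
NS_UUID=fc94def49d18411ba9d9239609778cec   # namespace GUID from the log

provision() {
    rpc nvmf_create_transport -t tcp -o             # TCP transport, C2H success opt
    rpc bdev_null_create null0 1024 512             # 1024 MiB bdev, 512 B blocks
    rpc nvmf_create_subsystem "$NQN" -a             # -a: allow any host
    rpc nvmf_subsystem_add_ns "$NQN" null0 -g "$NS_UUID"
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n "$NQN"
}
```

After this sequence, `bdev_get_bdevs -b nvme0n1` returns the descriptor seen above: 2097152 blocks of 512 bytes (the 1 GiB null bdev) with the namespace GUID surfaced as the bdev UUID.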
00:16:51.610 [2024-04-18 17:04:07.281004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:51.610 [2024-04-18 17:04:07.281092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2c9f0 (9): Bad file descriptor 00:16:51.869 [2024-04-18 17:04:07.423535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:51.869 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.869 17:04:07 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:51.869 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.869 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.869 [ 00:16:51.869 { 00:16:51.869 "name": "nvme0n1", 00:16:51.869 "aliases": [ 00:16:51.869 "fc94def4-9d18-411b-a9d9-239609778cec" 00:16:51.869 ], 00:16:51.869 "product_name": "NVMe disk", 00:16:51.869 "block_size": 512, 00:16:51.869 "num_blocks": 2097152, 00:16:51.869 "uuid": "fc94def4-9d18-411b-a9d9-239609778cec", 00:16:51.869 "assigned_rate_limits": { 00:16:51.869 "rw_ios_per_sec": 0, 00:16:51.869 "rw_mbytes_per_sec": 0, 00:16:51.869 "r_mbytes_per_sec": 0, 00:16:51.869 "w_mbytes_per_sec": 0 00:16:51.869 }, 00:16:51.869 "claimed": false, 00:16:51.869 "zoned": false, 00:16:51.869 "supported_io_types": { 00:16:51.869 "read": true, 00:16:51.869 "write": true, 00:16:51.869 "unmap": false, 00:16:51.869 "write_zeroes": true, 00:16:51.869 "flush": true, 00:16:51.869 "reset": true, 00:16:51.869 "compare": true, 00:16:51.869 "compare_and_write": true, 00:16:51.869 "abort": true, 00:16:51.869 "nvme_admin": true, 00:16:51.869 "nvme_io": true 00:16:51.869 }, 00:16:51.869 "memory_domains": [ 00:16:51.869 { 00:16:51.869 "dma_device_id": "system", 00:16:51.869 "dma_device_type": 1 00:16:51.869 } 00:16:51.869 ], 00:16:51.869 "driver_specific": { 00:16:51.869 "nvme": [ 00:16:51.869 { 00:16:51.869 "trid": { 00:16:51.869 "trtype": "TCP", 
00:16:51.869 "adrfam": "IPv4", 00:16:51.869 "traddr": "10.0.0.2", 00:16:51.869 "trsvcid": "4420", 00:16:51.869 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:51.869 }, 00:16:51.869 "ctrlr_data": { 00:16:51.869 "cntlid": 2, 00:16:51.869 "vendor_id": "0x8086", 00:16:51.869 "model_number": "SPDK bdev Controller", 00:16:51.869 "serial_number": "00000000000000000000", 00:16:51.869 "firmware_revision": "24.05", 00:16:51.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.869 "oacs": { 00:16:51.869 "security": 0, 00:16:51.869 "format": 0, 00:16:51.869 "firmware": 0, 00:16:51.869 "ns_manage": 0 00:16:51.869 }, 00:16:51.869 "multi_ctrlr": true, 00:16:51.869 "ana_reporting": false 00:16:51.869 }, 00:16:51.869 "vs": { 00:16:51.869 "nvme_version": "1.3" 00:16:51.869 }, 00:16:51.869 "ns_data": { 00:16:51.869 "id": 1, 00:16:51.869 "can_share": true 00:16:51.869 } 00:16:51.869 } 00:16:51.869 ], 00:16:51.869 "mp_policy": "active_passive" 00:16:51.869 } 00:16:51.869 } 00:16:51.869 ] 00:16:51.869 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.869 17:04:07 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.869 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.869 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.869 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.869 17:04:07 -- host/async_init.sh@53 -- # mktemp 00:16:51.869 17:04:07 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.dHPrrKC3Ss 00:16:51.869 17:04:07 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:51.869 17:04:07 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.dHPrrKC3Ss 00:16:51.869 17:04:07 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:51.869 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.869 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.869 17:04:07 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.869 17:04:07 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:51.869 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.869 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.869 [2024-04-18 17:04:07.477659] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:51.869 [2024-04-18 17:04:07.477808] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:51.869 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.869 17:04:07 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dHPrrKC3Ss 00:16:51.869 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.869 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.869 [2024-04-18 17:04:07.485704] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:51.869 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.869 17:04:07 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dHPrrKC3Ss 00:16:51.869 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.869 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.869 [2024-04-18 17:04:07.493703] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.869 [2024-04-18 17:04:07.493772] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:51.869 nvme0n1 00:16:51.869 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:16:51.869 17:04:07 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:51.869 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.869 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:52.129 [ 00:16:52.129 { 00:16:52.129 "name": "nvme0n1", 00:16:52.129 "aliases": [ 00:16:52.129 "fc94def4-9d18-411b-a9d9-239609778cec" 00:16:52.129 ], 00:16:52.129 "product_name": "NVMe disk", 00:16:52.129 "block_size": 512, 00:16:52.129 "num_blocks": 2097152, 00:16:52.129 "uuid": "fc94def4-9d18-411b-a9d9-239609778cec", 00:16:52.129 "assigned_rate_limits": { 00:16:52.129 "rw_ios_per_sec": 0, 00:16:52.129 "rw_mbytes_per_sec": 0, 00:16:52.129 "r_mbytes_per_sec": 0, 00:16:52.129 "w_mbytes_per_sec": 0 00:16:52.129 }, 00:16:52.129 "claimed": false, 00:16:52.129 "zoned": false, 00:16:52.129 "supported_io_types": { 00:16:52.129 "read": true, 00:16:52.129 "write": true, 00:16:52.129 "unmap": false, 00:16:52.129 "write_zeroes": true, 00:16:52.129 "flush": true, 00:16:52.129 "reset": true, 00:16:52.129 "compare": true, 00:16:52.129 "compare_and_write": true, 00:16:52.129 "abort": true, 00:16:52.129 "nvme_admin": true, 00:16:52.129 "nvme_io": true 00:16:52.129 }, 00:16:52.129 "memory_domains": [ 00:16:52.129 { 00:16:52.129 "dma_device_id": "system", 00:16:52.129 "dma_device_type": 1 00:16:52.129 } 00:16:52.129 ], 00:16:52.129 "driver_specific": { 00:16:52.129 "nvme": [ 00:16:52.129 { 00:16:52.129 "trid": { 00:16:52.129 "trtype": "TCP", 00:16:52.129 "adrfam": "IPv4", 00:16:52.129 "traddr": "10.0.0.2", 00:16:52.129 "trsvcid": "4421", 00:16:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:52.129 }, 00:16:52.129 "ctrlr_data": { 00:16:52.129 "cntlid": 3, 00:16:52.129 "vendor_id": "0x8086", 00:16:52.129 "model_number": "SPDK bdev Controller", 00:16:52.129 "serial_number": "00000000000000000000", 00:16:52.129 "firmware_revision": "24.05", 00:16:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.129 "oacs": { 00:16:52.129 "security": 0, 
00:16:52.129 "format": 0, 00:16:52.129 "firmware": 0, 00:16:52.129 "ns_manage": 0 00:16:52.129 }, 00:16:52.129 "multi_ctrlr": true, 00:16:52.129 "ana_reporting": false 00:16:52.129 }, 00:16:52.129 "vs": { 00:16:52.129 "nvme_version": "1.3" 00:16:52.129 }, 00:16:52.129 "ns_data": { 00:16:52.129 "id": 1, 00:16:52.129 "can_share": true 00:16:52.129 } 00:16:52.129 } 00:16:52.129 ], 00:16:52.129 "mp_policy": "active_passive" 00:16:52.129 } 00:16:52.129 } 00:16:52.129 ] 00:16:52.129 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.129 17:04:07 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.129 17:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.129 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:16:52.129 17:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.129 17:04:07 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.dHPrrKC3Ss 00:16:52.129 17:04:07 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:52.129 17:04:07 -- host/async_init.sh@78 -- # nvmftestfini 00:16:52.129 17:04:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:52.129 17:04:07 -- nvmf/common.sh@117 -- # sync 00:16:52.129 17:04:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:52.129 17:04:07 -- nvmf/common.sh@120 -- # set +e 00:16:52.129 17:04:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:52.129 17:04:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:52.129 rmmod nvme_tcp 00:16:52.129 rmmod nvme_fabrics 00:16:52.129 rmmod nvme_keyring 00:16:52.129 17:04:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:52.129 17:04:07 -- nvmf/common.sh@124 -- # set -e 00:16:52.129 17:04:07 -- nvmf/common.sh@125 -- # return 0 00:16:52.129 17:04:07 -- nvmf/common.sh@478 -- # '[' -n 1718454 ']' 00:16:52.129 17:04:07 -- nvmf/common.sh@479 -- # killprocess 1718454 00:16:52.129 17:04:07 -- common/autotest_common.sh@936 -- # '[' -z 1718454 ']' 00:16:52.129 17:04:07 -- 
common/autotest_common.sh@940 -- # kill -0 1718454 00:16:52.129 17:04:07 -- common/autotest_common.sh@941 -- # uname 00:16:52.129 17:04:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.129 17:04:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1718454 00:16:52.129 17:04:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:52.129 17:04:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:52.129 17:04:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1718454' 00:16:52.129 killing process with pid 1718454 00:16:52.129 17:04:07 -- common/autotest_common.sh@955 -- # kill 1718454 00:16:52.129 [2024-04-18 17:04:07.663372] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:52.129 [2024-04-18 17:04:07.663417] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:52.129 17:04:07 -- common/autotest_common.sh@960 -- # wait 1718454 00:16:52.388 17:04:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:52.388 17:04:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:52.388 17:04:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:52.388 17:04:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.388 17:04:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:52.388 17:04:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.388 17:04:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.388 17:04:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.293 17:04:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.293 00:16:54.293 real 0m6.218s 00:16:54.293 user 0m2.930s 00:16:54.293 sys 0m1.876s 00:16:54.293 17:04:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:54.293 17:04:09 -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.293 ************************************ 00:16:54.293 END TEST nvmf_async_init 00:16:54.293 ************************************ 00:16:54.293 17:04:09 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:54.293 17:04:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:54.293 17:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.293 17:04:09 -- common/autotest_common.sh@10 -- # set +x 00:16:54.551 ************************************ 00:16:54.551 START TEST dma 00:16:54.551 ************************************ 00:16:54.551 17:04:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:54.551 * Looking for test storage... 00:16:54.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:54.551 17:04:10 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.551 17:04:10 -- nvmf/common.sh@7 -- # uname -s 00:16:54.551 17:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.551 17:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.551 17:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.551 17:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.551 17:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.551 17:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.551 17:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.551 17:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.551 17:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.551 17:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.551 17:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.551 
17:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.551 17:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.551 17:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.551 17:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.551 17:04:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.551 17:04:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.551 17:04:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.551 17:04:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.551 17:04:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.551 17:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.551 17:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.551 17:04:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.551 17:04:10 -- paths/export.sh@5 -- # export PATH 00:16:54.551 17:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.551 17:04:10 -- nvmf/common.sh@47 -- # : 0 00:16:54.551 17:04:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.551 17:04:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.551 17:04:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.551 17:04:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.551 17:04:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.551 17:04:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.551 17:04:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.551 17:04:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.551 17:04:10 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:54.551 17:04:10 -- host/dma.sh@13 -- # exit 0 00:16:54.551 00:16:54.551 real 0m0.059s 00:16:54.551 user 0m0.027s 
00:16:54.551 sys 0m0.037s 00:16:54.551 17:04:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:54.551 17:04:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.551 ************************************ 00:16:54.551 END TEST dma 00:16:54.551 ************************************ 00:16:54.551 17:04:10 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:54.551 17:04:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:54.551 17:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.551 17:04:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.810 ************************************ 00:16:54.810 START TEST nvmf_identify 00:16:54.810 ************************************ 00:16:54.810 17:04:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:54.810 * Looking for test storage... 
00:16:54.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:54.810 17:04:10 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.810 17:04:10 -- nvmf/common.sh@7 -- # uname -s 00:16:54.810 17:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.810 17:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.810 17:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.810 17:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.810 17:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.810 17:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.810 17:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.810 17:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.810 17:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.810 17:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.810 17:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.810 17:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.810 17:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.810 17:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.810 17:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.810 17:04:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.810 17:04:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.810 17:04:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.810 17:04:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.810 17:04:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.810 17:04:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.810 17:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.810 17:04:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.810 17:04:10 -- paths/export.sh@5 -- # export PATH 00:16:54.810 17:04:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.810 17:04:10 -- nvmf/common.sh@47 -- # : 0 00:16:54.810 17:04:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.810 17:04:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.810 17:04:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.810 17:04:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.810 17:04:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.810 17:04:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.810 17:04:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.810 17:04:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.810 17:04:10 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.810 17:04:10 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.810 17:04:10 -- host/identify.sh@14 -- # nvmftestinit 00:16:54.810 17:04:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:54.810 17:04:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.810 17:04:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:54.810 17:04:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:54.810 17:04:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:54.810 17:04:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.810 17:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.810 17:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.810 17:04:10 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:54.810 17:04:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:54.810 17:04:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.810 17:04:10 -- common/autotest_common.sh@10 -- # set +x 00:16:56.716 17:04:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:56.716 17:04:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.716 17:04:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.716 17:04:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.716 17:04:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.716 17:04:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.716 17:04:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.716 17:04:12 -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.716 17:04:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.716 17:04:12 -- nvmf/common.sh@296 -- # e810=() 00:16:56.716 17:04:12 -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.716 17:04:12 -- nvmf/common.sh@297 -- # x722=() 00:16:56.716 17:04:12 -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.716 17:04:12 -- nvmf/common.sh@298 -- # mlx=() 00:16:56.717 17:04:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.717 17:04:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.717 17:04:12 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.717 17:04:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.717 17:04:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.717 17:04:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.717 17:04:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.717 17:04:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:56.717 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:56.717 17:04:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.717 17:04:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:56.717 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:56.717 17:04:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.717 17:04:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.717 17:04:12 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.717 17:04:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.717 17:04:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:56.717 17:04:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.717 17:04:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:56.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:56.717 17:04:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.717 17:04:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.717 17:04:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.717 17:04:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:56.717 17:04:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.717 17:04:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:56.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:56.717 17:04:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.717 17:04:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:56.717 17:04:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:56.717 17:04:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:56.717 17:04:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.717 17:04:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.717 17:04:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.717 17:04:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.717 17:04:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.717 17:04:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.717 17:04:12 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.717 17:04:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.717 17:04:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.717 17:04:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.717 17:04:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.717 17:04:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.717 17:04:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.717 17:04:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.717 17:04:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.717 17:04:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:56.717 17:04:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.717 17:04:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.717 17:04:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.717 17:04:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:56.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:16:56.717 00:16:56.717 --- 10.0.0.2 ping statistics --- 00:16:56.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.717 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:56.717 17:04:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:16:56.717 00:16:56.717 --- 10.0.0.1 ping statistics --- 00:16:56.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.717 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:16:56.717 17:04:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.717 17:04:12 -- nvmf/common.sh@411 -- # return 0 00:16:56.717 17:04:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:56.717 17:04:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.717 17:04:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:56.717 17:04:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.717 17:04:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:56.717 17:04:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:56.717 17:04:12 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:56.717 17:04:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:56.717 17:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.717 17:04:12 -- host/identify.sh@19 -- # nvmfpid=1720720 00:16:56.717 17:04:12 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:56.717 17:04:12 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.717 17:04:12 -- host/identify.sh@23 -- # waitforlisten 1720720 00:16:56.717 17:04:12 -- common/autotest_common.sh@817 -- # '[' -z 1720720 ']' 00:16:56.717 17:04:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.717 17:04:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:56.717 17:04:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:56.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.717 17:04:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:56.717 17:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.717 [2024-04-18 17:04:12.392616] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:56.717 [2024-04-18 17:04:12.392704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.975 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.975 [2024-04-18 17:04:12.463528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.975 [2024-04-18 17:04:12.579320] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.975 [2024-04-18 17:04:12.579391] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.975 [2024-04-18 17:04:12.579414] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.975 [2024-04-18 17:04:12.579428] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.975 [2024-04-18 17:04:12.579465] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.975 [2024-04-18 17:04:12.579557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.975 [2024-04-18 17:04:12.579807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.975 [2024-04-18 17:04:12.579868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.975 [2024-04-18 17:04:12.579865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.914 17:04:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:57.914 17:04:13 -- common/autotest_common.sh@850 -- # return 0 00:16:57.914 17:04:13 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.914 17:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.914 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.914 [2024-04-18 17:04:13.351205] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.914 17:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.914 17:04:13 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:57.914 17:04:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:57.914 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.914 17:04:13 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:57.914 17:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.914 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.914 Malloc0 00:16:57.914 17:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.914 17:04:13 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:57.914 17:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.914 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.914 17:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.914 17:04:13 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:57.914 17:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.914 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.914 17:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.914 17:04:13 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.914 17:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.914 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.914 [2024-04-18 17:04:13.422023] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.914 17:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.914 17:04:13 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:57.914 17:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.914 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.914 17:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.914 17:04:13 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:57.914 17:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.914 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:57.914 [2024-04-18 17:04:13.437790] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:57.914 [ 00:16:57.914 { 00:16:57.914 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:57.914 "subtype": "Discovery", 00:16:57.914 "listen_addresses": [ 00:16:57.914 { 00:16:57.914 "transport": "TCP", 00:16:57.914 "trtype": "TCP", 00:16:57.914 "adrfam": "IPv4", 00:16:57.914 "traddr": "10.0.0.2", 00:16:57.914 "trsvcid": "4420" 00:16:57.914 } 00:16:57.914 ], 00:16:57.914 "allow_any_host": true, 00:16:57.914 "hosts": [] 00:16:57.914 }, 00:16:57.914 
{ 00:16:57.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.914 "subtype": "NVMe", 00:16:57.914 "listen_addresses": [ 00:16:57.914 { 00:16:57.914 "transport": "TCP", 00:16:57.914 "trtype": "TCP", 00:16:57.914 "adrfam": "IPv4", 00:16:57.914 "traddr": "10.0.0.2", 00:16:57.914 "trsvcid": "4420" 00:16:57.914 } 00:16:57.914 ], 00:16:57.914 "allow_any_host": true, 00:16:57.914 "hosts": [], 00:16:57.914 "serial_number": "SPDK00000000000001", 00:16:57.914 "model_number": "SPDK bdev Controller", 00:16:57.914 "max_namespaces": 32, 00:16:57.914 "min_cntlid": 1, 00:16:57.914 "max_cntlid": 65519, 00:16:57.914 "namespaces": [ 00:16:57.914 { 00:16:57.914 "nsid": 1, 00:16:57.914 "bdev_name": "Malloc0", 00:16:57.914 "name": "Malloc0", 00:16:57.914 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:57.914 "eui64": "ABCDEF0123456789", 00:16:57.914 "uuid": "eac31345-f186-409c-aa7c-9e579017c961" 00:16:57.914 } 00:16:57.914 ] 00:16:57.914 } 00:16:57.914 ] 00:16:57.914 17:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.914 17:04:13 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:57.914 [2024-04-18 17:04:13.459585] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:57.914 [2024-04-18 17:04:13.459621] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720874 ] 00:16:57.914 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.914 [2024-04-18 17:04:13.490527] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:57.914 [2024-04-18 17:04:13.490586] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:57.914 [2024-04-18 17:04:13.490596] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:57.914 [2024-04-18 17:04:13.490611] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:57.914 [2024-04-18 17:04:13.490624] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:57.914 [2024-04-18 17:04:13.494423] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:57.914 [2024-04-18 17:04:13.494484] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a0fd00 0 00:16:57.914 [2024-04-18 17:04:13.502398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:57.914 [2024-04-18 17:04:13.502417] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:57.915 [2024-04-18 17:04:13.502426] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:57.915 [2024-04-18 17:04:13.502432] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:57.915 [2024-04-18 17:04:13.502494] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.502507] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:57.915 [2024-04-18 17:04:13.502514] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.915 [2024-04-18 17:04:13.502531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:57.915 [2024-04-18 17:04:13.502557] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.915 [2024-04-18 17:04:13.509393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.915 [2024-04-18 17:04:13.509413] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.915 [2024-04-18 17:04:13.509421] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509429] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.915 [2024-04-18 17:04:13.509450] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:57.915 [2024-04-18 17:04:13.509462] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:57.915 [2024-04-18 17:04:13.509471] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:57.915 [2024-04-18 17:04:13.509492] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509500] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509507] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.915 [2024-04-18 17:04:13.509518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.915 [2024-04-18 17:04:13.509541] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1a6eec0, cid 0, qid 0 00:16:57.915 [2024-04-18 17:04:13.509669] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.915 [2024-04-18 17:04:13.509684] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.915 [2024-04-18 17:04:13.509691] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509698] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.915 [2024-04-18 17:04:13.509708] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:57.915 [2024-04-18 17:04:13.509721] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:57.915 [2024-04-18 17:04:13.509733] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509740] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509746] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.915 [2024-04-18 17:04:13.509757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.915 [2024-04-18 17:04:13.509778] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.915 [2024-04-18 17:04:13.509877] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.915 [2024-04-18 17:04:13.509892] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.915 [2024-04-18 17:04:13.509899] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509906] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.915 [2024-04-18 
17:04:13.509916] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:57.915 [2024-04-18 17:04:13.509929] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:57.915 [2024-04-18 17:04:13.509941] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509948] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.509954] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.915 [2024-04-18 17:04:13.509965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.915 [2024-04-18 17:04:13.509986] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.915 [2024-04-18 17:04:13.510080] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.915 [2024-04-18 17:04:13.510092] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.915 [2024-04-18 17:04:13.510099] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510106] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.915 [2024-04-18 17:04:13.510116] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:57.915 [2024-04-18 17:04:13.510131] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510140] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510146] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1a0fd00) 00:16:57.915 [2024-04-18 17:04:13.510157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.915 [2024-04-18 17:04:13.510177] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.915 [2024-04-18 17:04:13.510275] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.915 [2024-04-18 17:04:13.510287] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.915 [2024-04-18 17:04:13.510298] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.915 [2024-04-18 17:04:13.510315] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:57.915 [2024-04-18 17:04:13.510323] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:57.915 [2024-04-18 17:04:13.510336] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:57.915 [2024-04-18 17:04:13.510446] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:57.915 [2024-04-18 17:04:13.510456] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:57.915 [2024-04-18 17:04:13.510469] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510477] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510483] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.915 [2024-04-18 17:04:13.510493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.915 [2024-04-18 17:04:13.510515] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.915 [2024-04-18 17:04:13.510625] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.915 [2024-04-18 17:04:13.510638] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.915 [2024-04-18 17:04:13.510645] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.915 [2024-04-18 17:04:13.510661] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:57.915 [2024-04-18 17:04:13.510677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.915 [2024-04-18 17:04:13.510702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.915 [2024-04-18 17:04:13.510722] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.915 [2024-04-18 17:04:13.510825] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.915 [2024-04-18 17:04:13.510840] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.915 [2024-04-18 17:04:13.510847] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510853] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.915 [2024-04-18 17:04:13.510862] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:57.915 [2024-04-18 17:04:13.510870] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:57.915 [2024-04-18 17:04:13.510883] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:57.915 [2024-04-18 17:04:13.510901] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:57.915 [2024-04-18 17:04:13.510919] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.510930] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.915 [2024-04-18 17:04:13.510942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.915 [2024-04-18 17:04:13.510963] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.915 [2024-04-18 17:04:13.511101] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:57.915 [2024-04-18 17:04:13.511113] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:57.915 [2024-04-18 17:04:13.511120] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.511127] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1a0fd00): datao=0, datal=4096, cccid=0 00:16:57.915 [2024-04-18 17:04:13.511135] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a6eec0) on tqpair(0x1a0fd00): expected_datao=0, payload_size=4096 00:16:57.915 [2024-04-18 17:04:13.511143] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.511153] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.511162] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.511174] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.915 [2024-04-18 17:04:13.511184] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.915 [2024-04-18 17:04:13.511190] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.915 [2024-04-18 17:04:13.511197] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.915 [2024-04-18 17:04:13.511210] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:57.915 [2024-04-18 17:04:13.511219] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:57.916 [2024-04-18 17:04:13.511227] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:57.916 [2024-04-18 17:04:13.511236] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:57.916 [2024-04-18 17:04:13.511243] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:57.916 [2024-04-18 17:04:13.511251] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:57.916 [2024-04-18 
17:04:13.511265] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:57.916 [2024-04-18 17:04:13.511277] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511291] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.511302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:57.916 [2024-04-18 17:04:13.511322] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.916 [2024-04-18 17:04:13.511448] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.916 [2024-04-18 17:04:13.511461] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.916 [2024-04-18 17:04:13.511468] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511475] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6eec0) on tqpair=0x1a0fd00 00:16:57.916 [2024-04-18 17:04:13.511487] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511495] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511501] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.511515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.916 [2024-04-18 17:04:13.511526] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511533] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511539] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.511548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.916 [2024-04-18 17:04:13.511558] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511564] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.511579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.916 [2024-04-18 17:04:13.511589] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511595] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511602] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.511610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.916 [2024-04-18 17:04:13.511619] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:57.916 [2024-04-18 17:04:13.511637] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:57.916 [2024-04-18 17:04:13.511649] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511656] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.511666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.916 [2024-04-18 17:04:13.511704] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6eec0, cid 0, qid 0 00:16:57.916 [2024-04-18 17:04:13.511715] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f020, cid 1, qid 0 00:16:57.916 [2024-04-18 17:04:13.511723] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f180, cid 2, qid 0 00:16:57.916 [2024-04-18 17:04:13.511730] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f2e0, cid 3, qid 0 00:16:57.916 [2024-04-18 17:04:13.511738] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f440, cid 4, qid 0 00:16:57.916 [2024-04-18 17:04:13.511888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.916 [2024-04-18 17:04:13.511903] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.916 [2024-04-18 17:04:13.511910] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511917] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f440) on tqpair=0x1a0fd00 00:16:57.916 [2024-04-18 17:04:13.511928] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:57.916 [2024-04-18 17:04:13.511936] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:57.916 [2024-04-18 17:04:13.511954] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.511963] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.511974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.916 [2024-04-18 17:04:13.511998] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f440, cid 4, qid 0 00:16:57.916 [2024-04-18 17:04:13.512112] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:57.916 [2024-04-18 17:04:13.512127] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:57.916 [2024-04-18 17:04:13.512135] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.512141] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0fd00): datao=0, datal=4096, cccid=4 00:16:57.916 [2024-04-18 17:04:13.512149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a6f440) on tqpair(0x1a0fd00): expected_datao=0, payload_size=4096 00:16:57.916 [2024-04-18 17:04:13.512156] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.512172] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.512181] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.916 [2024-04-18 17:04:13.554412] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.916 [2024-04-18 17:04:13.554419] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554426] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f440) on tqpair=0x1a0fd00 00:16:57.916 [2024-04-18 17:04:13.554447] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:57.916 [2024-04-18 17:04:13.554492] 
nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554502] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.554513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.916 [2024-04-18 17:04:13.554525] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554532] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554538] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.554547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.916 [2024-04-18 17:04:13.554576] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f440, cid 4, qid 0 00:16:57.916 [2024-04-18 17:04:13.554588] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f5a0, cid 5, qid 0 00:16:57.916 [2024-04-18 17:04:13.554733] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:57.916 [2024-04-18 17:04:13.554746] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:57.916 [2024-04-18 17:04:13.554753] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554759] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0fd00): datao=0, datal=1024, cccid=4 00:16:57.916 [2024-04-18 17:04:13.554767] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a6f440) on tqpair(0x1a0fd00): expected_datao=0, payload_size=1024 00:16:57.916 [2024-04-18 17:04:13.554775] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554785] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554792] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554801] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.916 [2024-04-18 17:04:13.554810] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.916 [2024-04-18 17:04:13.554816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.554823] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f5a0) on tqpair=0x1a0fd00 00:16:57.916 [2024-04-18 17:04:13.595522] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.916 [2024-04-18 17:04:13.595542] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.916 [2024-04-18 17:04:13.595554] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.595562] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f440) on tqpair=0x1a0fd00 00:16:57.916 [2024-04-18 17:04:13.595582] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.595591] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0fd00) 00:16:57.916 [2024-04-18 17:04:13.595602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.916 [2024-04-18 17:04:13.595632] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f440, cid 4, qid 0 00:16:57.916 [2024-04-18 17:04:13.595843] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:57.916 [2024-04-18 17:04:13.595856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:57.916 [2024-04-18 17:04:13.595863] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:16:57.916 [2024-04-18 17:04:13.595869] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0fd00): datao=0, datal=3072, cccid=4 00:16:57.916 [2024-04-18 17:04:13.595877] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a6f440) on tqpair(0x1a0fd00): expected_datao=0, payload_size=3072 00:16:57.916 [2024-04-18 17:04:13.595884] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.595895] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.595902] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:57.916 [2024-04-18 17:04:13.595914] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:57.916 [2024-04-18 17:04:13.595924] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:57.917 [2024-04-18 17:04:13.595930] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:57.917 [2024-04-18 17:04:13.595937] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f440) on tqpair=0x1a0fd00 00:16:57.917 [2024-04-18 17:04:13.595953] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:57.917 [2024-04-18 17:04:13.595962] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0fd00) 00:16:57.917 [2024-04-18 17:04:13.595973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.917 [2024-04-18 17:04:13.596000] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f440, cid 4, qid 0 00:16:57.917 [2024-04-18 17:04:13.596123] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:57.917 [2024-04-18 17:04:13.596135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:57.917 [2024-04-18 17:04:13.596142] 
nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:57.917 [2024-04-18 17:04:13.596149] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0fd00): datao=0, datal=8, cccid=4 00:16:57.917 [2024-04-18 17:04:13.596156] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a6f440) on tqpair(0x1a0fd00): expected_datao=0, payload_size=8 00:16:57.917 [2024-04-18 17:04:13.596164] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:57.917 [2024-04-18 17:04:13.596173] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:57.917 [2024-04-18 17:04:13.596181] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.223 [2024-04-18 17:04:13.636473] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.223 [2024-04-18 17:04:13.636491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.223 [2024-04-18 17:04:13.636498] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.223 [2024-04-18 17:04:13.636505] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f440) on tqpair=0x1a0fd00 00:16:58.223 ===================================================== 00:16:58.224 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:58.224 ===================================================== 00:16:58.224 Controller Capabilities/Features 00:16:58.224 ================================ 00:16:58.224 Vendor ID: 0000 00:16:58.224 Subsystem Vendor ID: 0000 00:16:58.224 Serial Number: .................... 00:16:58.224 Model Number: ........................................ 
00:16:58.224 Firmware Version: 24.05 00:16:58.224 Recommended Arb Burst: 0 00:16:58.224 IEEE OUI Identifier: 00 00 00 00:16:58.224 Multi-path I/O 00:16:58.224 May have multiple subsystem ports: No 00:16:58.224 May have multiple controllers: No 00:16:58.224 Associated with SR-IOV VF: No 00:16:58.224 Max Data Transfer Size: 131072 00:16:58.224 Max Number of Namespaces: 0 00:16:58.224 Max Number of I/O Queues: 1024 00:16:58.224 NVMe Specification Version (VS): 1.3 00:16:58.224 NVMe Specification Version (Identify): 1.3 00:16:58.224 Maximum Queue Entries: 128 00:16:58.224 Contiguous Queues Required: Yes 00:16:58.224 Arbitration Mechanisms Supported 00:16:58.224 Weighted Round Robin: Not Supported 00:16:58.224 Vendor Specific: Not Supported 00:16:58.224 Reset Timeout: 15000 ms 00:16:58.224 Doorbell Stride: 4 bytes 00:16:58.224 NVM Subsystem Reset: Not Supported 00:16:58.224 Command Sets Supported 00:16:58.224 NVM Command Set: Supported 00:16:58.224 Boot Partition: Not Supported 00:16:58.224 Memory Page Size Minimum: 4096 bytes 00:16:58.224 Memory Page Size Maximum: 4096 bytes 00:16:58.224 Persistent Memory Region: Not Supported 00:16:58.224 Optional Asynchronous Events Supported 00:16:58.224 Namespace Attribute Notices: Not Supported 00:16:58.224 Firmware Activation Notices: Not Supported 00:16:58.224 ANA Change Notices: Not Supported 00:16:58.224 PLE Aggregate Log Change Notices: Not Supported 00:16:58.224 LBA Status Info Alert Notices: Not Supported 00:16:58.224 EGE Aggregate Log Change Notices: Not Supported 00:16:58.224 Normal NVM Subsystem Shutdown event: Not Supported 00:16:58.224 Zone Descriptor Change Notices: Not Supported 00:16:58.224 Discovery Log Change Notices: Supported 00:16:58.224 Controller Attributes 00:16:58.224 128-bit Host Identifier: Not Supported 00:16:58.224 Non-Operational Permissive Mode: Not Supported 00:16:58.224 NVM Sets: Not Supported 00:16:58.224 Read Recovery Levels: Not Supported 00:16:58.224 Endurance Groups: Not Supported 00:16:58.224 
Predictable Latency Mode: Not Supported 00:16:58.224 Traffic Based Keep ALive: Not Supported 00:16:58.224 Namespace Granularity: Not Supported 00:16:58.224 SQ Associations: Not Supported 00:16:58.224 UUID List: Not Supported 00:16:58.224 Multi-Domain Subsystem: Not Supported 00:16:58.224 Fixed Capacity Management: Not Supported 00:16:58.224 Variable Capacity Management: Not Supported 00:16:58.224 Delete Endurance Group: Not Supported 00:16:58.224 Delete NVM Set: Not Supported 00:16:58.224 Extended LBA Formats Supported: Not Supported 00:16:58.224 Flexible Data Placement Supported: Not Supported 00:16:58.224 00:16:58.224 Controller Memory Buffer Support 00:16:58.224 ================================ 00:16:58.224 Supported: No 00:16:58.224 00:16:58.224 Persistent Memory Region Support 00:16:58.224 ================================ 00:16:58.224 Supported: No 00:16:58.224 00:16:58.224 Admin Command Set Attributes 00:16:58.224 ============================ 00:16:58.224 Security Send/Receive: Not Supported 00:16:58.224 Format NVM: Not Supported 00:16:58.224 Firmware Activate/Download: Not Supported 00:16:58.224 Namespace Management: Not Supported 00:16:58.224 Device Self-Test: Not Supported 00:16:58.224 Directives: Not Supported 00:16:58.224 NVMe-MI: Not Supported 00:16:58.224 Virtualization Management: Not Supported 00:16:58.224 Doorbell Buffer Config: Not Supported 00:16:58.224 Get LBA Status Capability: Not Supported 00:16:58.224 Command & Feature Lockdown Capability: Not Supported 00:16:58.224 Abort Command Limit: 1 00:16:58.224 Async Event Request Limit: 4 00:16:58.224 Number of Firmware Slots: N/A 00:16:58.224 Firmware Slot 1 Read-Only: N/A 00:16:58.224 Firmware Activation Without Reset: N/A 00:16:58.224 Multiple Update Detection Support: N/A 00:16:58.224 Firmware Update Granularity: No Information Provided 00:16:58.224 Per-Namespace SMART Log: No 00:16:58.224 Asymmetric Namespace Access Log Page: Not Supported 00:16:58.224 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:16:58.224 Command Effects Log Page: Not Supported 00:16:58.224 Get Log Page Extended Data: Supported 00:16:58.224 Telemetry Log Pages: Not Supported 00:16:58.224 Persistent Event Log Pages: Not Supported 00:16:58.224 Supported Log Pages Log Page: May Support 00:16:58.224 Commands Supported & Effects Log Page: Not Supported 00:16:58.224 Feature Identifiers & Effects Log Page:May Support 00:16:58.224 NVMe-MI Commands & Effects Log Page: May Support 00:16:58.224 Data Area 4 for Telemetry Log: Not Supported 00:16:58.224 Error Log Page Entries Supported: 128 00:16:58.224 Keep Alive: Not Supported 00:16:58.224 00:16:58.224 NVM Command Set Attributes 00:16:58.224 ========================== 00:16:58.224 Submission Queue Entry Size 00:16:58.224 Max: 1 00:16:58.224 Min: 1 00:16:58.224 Completion Queue Entry Size 00:16:58.224 Max: 1 00:16:58.224 Min: 1 00:16:58.224 Number of Namespaces: 0 00:16:58.224 Compare Command: Not Supported 00:16:58.224 Write Uncorrectable Command: Not Supported 00:16:58.224 Dataset Management Command: Not Supported 00:16:58.224 Write Zeroes Command: Not Supported 00:16:58.224 Set Features Save Field: Not Supported 00:16:58.224 Reservations: Not Supported 00:16:58.224 Timestamp: Not Supported 00:16:58.224 Copy: Not Supported 00:16:58.224 Volatile Write Cache: Not Present 00:16:58.224 Atomic Write Unit (Normal): 1 00:16:58.224 Atomic Write Unit (PFail): 1 00:16:58.224 Atomic Compare & Write Unit: 1 00:16:58.224 Fused Compare & Write: Supported 00:16:58.224 Scatter-Gather List 00:16:58.224 SGL Command Set: Supported 00:16:58.224 SGL Keyed: Supported 00:16:58.224 SGL Bit Bucket Descriptor: Not Supported 00:16:58.224 SGL Metadata Pointer: Not Supported 00:16:58.224 Oversized SGL: Not Supported 00:16:58.224 SGL Metadata Address: Not Supported 00:16:58.224 SGL Offset: Supported 00:16:58.224 Transport SGL Data Block: Not Supported 00:16:58.224 Replay Protected Memory Block: Not Supported 00:16:58.224 00:16:58.224 
Firmware Slot Information 00:16:58.224 ========================= 00:16:58.224 Active slot: 0 00:16:58.224 00:16:58.224 00:16:58.224 Error Log 00:16:58.224 ========= 00:16:58.224 00:16:58.224 Active Namespaces 00:16:58.224 ================= 00:16:58.224 Discovery Log Page 00:16:58.224 ================== 00:16:58.224 Generation Counter: 2 00:16:58.224 Number of Records: 2 00:16:58.224 Record Format: 0 00:16:58.224 00:16:58.224 Discovery Log Entry 0 00:16:58.224 ---------------------- 00:16:58.224 Transport Type: 3 (TCP) 00:16:58.224 Address Family: 1 (IPv4) 00:16:58.224 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:58.224 Entry Flags: 00:16:58.224 Duplicate Returned Information: 1 00:16:58.224 Explicit Persistent Connection Support for Discovery: 1 00:16:58.224 Transport Requirements: 00:16:58.224 Secure Channel: Not Required 00:16:58.224 Port ID: 0 (0x0000) 00:16:58.224 Controller ID: 65535 (0xffff) 00:16:58.224 Admin Max SQ Size: 128 00:16:58.224 Transport Service Identifier: 4420 00:16:58.224 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:58.224 Transport Address: 10.0.0.2 00:16:58.224 Discovery Log Entry 1 00:16:58.224 ---------------------- 00:16:58.224 Transport Type: 3 (TCP) 00:16:58.224 Address Family: 1 (IPv4) 00:16:58.224 Subsystem Type: 2 (NVM Subsystem) 00:16:58.224 Entry Flags: 00:16:58.224 Duplicate Returned Information: 0 00:16:58.224 Explicit Persistent Connection Support for Discovery: 0 00:16:58.225 Transport Requirements: 00:16:58.225 Secure Channel: Not Required 00:16:58.225 Port ID: 0 (0x0000) 00:16:58.225 Controller ID: 65535 (0xffff) 00:16:58.225 Admin Max SQ Size: 128 00:16:58.225 Transport Service Identifier: 4420 00:16:58.225 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:58.225 Transport Address: 10.0.0.2 [2024-04-18 17:04:13.636617] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:58.225 [2024-04-18 17:04:13.636641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.225 [2024-04-18 17:04:13.636657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.225 [2024-04-18 17:04:13.636667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.225 [2024-04-18 17:04:13.636676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.225 [2024-04-18 17:04:13.636689] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.636697] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.636703] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0fd00) 00:16:58.225 [2024-04-18 17:04:13.636715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.225 [2024-04-18 17:04:13.636738] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f2e0, cid 3, qid 0 00:16:58.225 [2024-04-18 17:04:13.636847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.225 [2024-04-18 17:04:13.636862] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.225 [2024-04-18 17:04:13.636869] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.636876] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f2e0) on tqpair=0x1a0fd00 00:16:58.225 [2024-04-18 17:04:13.636889] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.636897] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.225 [2024-04-18 
17:04:13.636903] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0fd00) 00:16:58.225 [2024-04-18 17:04:13.636913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.225 [2024-04-18 17:04:13.636939] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f2e0, cid 3, qid 0 00:16:58.225 [2024-04-18 17:04:13.637082] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.225 [2024-04-18 17:04:13.637097] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.225 [2024-04-18 17:04:13.637104] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.637110] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f2e0) on tqpair=0x1a0fd00 00:16:58.225 [2024-04-18 17:04:13.637120] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:58.225 [2024-04-18 17:04:13.637128] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:58.225 [2024-04-18 17:04:13.637143] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.637152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.637158] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0fd00) 00:16:58.225 [2024-04-18 17:04:13.637169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.225 [2024-04-18 17:04:13.637189] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f2e0, cid 3, qid 0 00:16:58.225 [2024-04-18 17:04:13.637300] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.225 [2024-04-18 
17:04:13.637313] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.225 [2024-04-18 17:04:13.637319] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.637326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f2e0) on tqpair=0x1a0fd00 00:16:58.225 [2024-04-18 17:04:13.637343] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.637351] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.637358] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0fd00) 00:16:58.225 [2024-04-18 17:04:13.637372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.225 [2024-04-18 17:04:13.641406] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a6f2e0, cid 3, qid 0 00:16:58.225 [2024-04-18 17:04:13.641542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.225 [2024-04-18 17:04:13.641558] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.225 [2024-04-18 17:04:13.641565] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.641572] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a6f2e0) on tqpair=0x1a0fd00 00:16:58.225 [2024-04-18 17:04:13.641587] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:16:58.225 00:16:58.225 17:04:13 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:58.225 [2024-04-18 17:04:13.672775] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:58.225 [2024-04-18 17:04:13.672817] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720876 ] 00:16:58.225 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.225 [2024-04-18 17:04:13.706086] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:58.225 [2024-04-18 17:04:13.706133] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:58.225 [2024-04-18 17:04:13.706143] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:58.225 [2024-04-18 17:04:13.706156] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:58.225 [2024-04-18 17:04:13.706167] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:58.225 [2024-04-18 17:04:13.706448] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:58.225 [2024-04-18 17:04:13.706491] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1aa2d00 0 00:16:58.225 [2024-04-18 17:04:13.713393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:58.225 [2024-04-18 17:04:13.713412] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:58.225 [2024-04-18 17:04:13.713421] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:58.225 [2024-04-18 17:04:13.713427] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:58.225 [2024-04-18 17:04:13.713478] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.713491] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.225 [2024-04-18 
17:04:13.713498] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.225 [2024-04-18 17:04:13.713512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:58.225 [2024-04-18 17:04:13.713538] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.225 [2024-04-18 17:04:13.720409] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.225 [2024-04-18 17:04:13.720427] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.225 [2024-04-18 17:04:13.720435] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.720442] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on tqpair=0x1aa2d00 00:16:58.225 [2024-04-18 17:04:13.720466] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:58.225 [2024-04-18 17:04:13.720489] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:58.225 [2024-04-18 17:04:13.720499] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:58.225 [2024-04-18 17:04:13.720517] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.720525] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.720532] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.225 [2024-04-18 17:04:13.720543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.225 [2024-04-18 17:04:13.720567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.225 
[2024-04-18 17:04:13.720709] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.225 [2024-04-18 17:04:13.720724] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.225 [2024-04-18 17:04:13.720731] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.720738] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on tqpair=0x1aa2d00 00:16:58.225 [2024-04-18 17:04:13.720747] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:58.225 [2024-04-18 17:04:13.720761] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:58.225 [2024-04-18 17:04:13.720773] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.720781] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.225 [2024-04-18 17:04:13.720787] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.226 [2024-04-18 17:04:13.720798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.226 [2024-04-18 17:04:13.720819] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.226 [2024-04-18 17:04:13.720924] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.226 [2024-04-18 17:04:13.720939] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.226 [2024-04-18 17:04:13.720946] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.720953] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on tqpair=0x1aa2d00 00:16:58.226 [2024-04-18 17:04:13.720963] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:58.226 [2024-04-18 17:04:13.720977] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:58.226 [2024-04-18 17:04:13.720989] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.720996] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721002] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.226 [2024-04-18 17:04:13.721013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.226 [2024-04-18 17:04:13.721033] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.226 [2024-04-18 17:04:13.721133] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.226 [2024-04-18 17:04:13.721147] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.226 [2024-04-18 17:04:13.721154] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721161] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on tqpair=0x1aa2d00 00:16:58.226 [2024-04-18 17:04:13.721171] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:58.226 [2024-04-18 17:04:13.721192] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721202] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721208] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.226 [2024-04-18 17:04:13.721219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.226 [2024-04-18 17:04:13.721239] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.226 [2024-04-18 17:04:13.721337] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.226 [2024-04-18 17:04:13.721349] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.226 [2024-04-18 17:04:13.721356] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721363] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on tqpair=0x1aa2d00 00:16:58.226 [2024-04-18 17:04:13.721371] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:58.226 [2024-04-18 17:04:13.721387] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:58.226 [2024-04-18 17:04:13.721403] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:58.226 [2024-04-18 17:04:13.721513] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:58.226 [2024-04-18 17:04:13.721520] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:58.226 [2024-04-18 17:04:13.721531] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721539] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721545] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.226 [2024-04-18 17:04:13.721556] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.226 [2024-04-18 17:04:13.721577] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.226 [2024-04-18 17:04:13.721717] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.226 [2024-04-18 17:04:13.721729] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.226 [2024-04-18 17:04:13.721736] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721742] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on tqpair=0x1aa2d00 00:16:58.226 [2024-04-18 17:04:13.721752] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:58.226 [2024-04-18 17:04:13.721768] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721776] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721783] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.226 [2024-04-18 17:04:13.721793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.226 [2024-04-18 17:04:13.721813] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.226 [2024-04-18 17:04:13.721913] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.226 [2024-04-18 17:04:13.721928] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.226 [2024-04-18 17:04:13.721935] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.721942] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on 
tqpair=0x1aa2d00 00:16:58.226 [2024-04-18 17:04:13.721955] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:58.226 [2024-04-18 17:04:13.721964] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:58.226 [2024-04-18 17:04:13.721977] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:58.226 [2024-04-18 17:04:13.721994] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:58.226 [2024-04-18 17:04:13.722010] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.722019] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.226 [2024-04-18 17:04:13.722030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.226 [2024-04-18 17:04:13.722051] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.226 [2024-04-18 17:04:13.722201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.226 [2024-04-18 17:04:13.722216] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.226 [2024-04-18 17:04:13.722223] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.722229] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aa2d00): datao=0, datal=4096, cccid=0 00:16:58.226 [2024-04-18 17:04:13.722237] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b01ec0) on tqpair(0x1aa2d00): expected_datao=0, payload_size=4096 00:16:58.226 
[2024-04-18 17:04:13.722245] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.722255] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.722263] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.722274] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.226 [2024-04-18 17:04:13.722285] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.226 [2024-04-18 17:04:13.722291] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.722298] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on tqpair=0x1aa2d00 00:16:58.226 [2024-04-18 17:04:13.722310] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:58.226 [2024-04-18 17:04:13.722319] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:58.226 [2024-04-18 17:04:13.722326] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:58.226 [2024-04-18 17:04:13.722333] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:58.226 [2024-04-18 17:04:13.722341] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:58.226 [2024-04-18 17:04:13.722348] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:58.226 [2024-04-18 17:04:13.722362] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:58.226 [2024-04-18 17:04:13.722374] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.226 [2024-04-18 
17:04:13.722389] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.226 [2024-04-18 17:04:13.722396] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.722407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.227 [2024-04-18 17:04:13.722439] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.227 [2024-04-18 17:04:13.722546] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.227 [2024-04-18 17:04:13.722559] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.227 [2024-04-18 17:04:13.722566] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722572] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b01ec0) on tqpair=0x1aa2d00 00:16:58.227 [2024-04-18 17:04:13.722584] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722591] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722597] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.722607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.227 [2024-04-18 17:04:13.722617] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722624] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722631] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.722639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.227 [2024-04-18 17:04:13.722649] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722655] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722662] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.722670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.227 [2024-04-18 17:04:13.722680] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722687] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722693] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.722702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.227 [2024-04-18 17:04:13.722710] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.722743] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.722755] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.722762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.722773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.227 [2024-04-18 17:04:13.722794] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b01ec0, cid 0, qid 0 00:16:58.227 [2024-04-18 17:04:13.722820] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02020, cid 1, qid 0 00:16:58.227 [2024-04-18 17:04:13.722828] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02180, cid 2, qid 0 00:16:58.227 [2024-04-18 17:04:13.722836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.227 [2024-04-18 17:04:13.722843] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02440, cid 4, qid 0 00:16:58.227 [2024-04-18 17:04:13.722991] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.227 [2024-04-18 17:04:13.723003] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.227 [2024-04-18 17:04:13.723010] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723017] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02440) on tqpair=0x1aa2d00 00:16:58.227 [2024-04-18 17:04:13.723029] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:58.227 [2024-04-18 17:04:13.723039] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.723056] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.723068] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.723078] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723086] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723092] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.723103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.227 [2024-04-18 17:04:13.723138] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02440, cid 4, qid 0 00:16:58.227 [2024-04-18 17:04:13.723312] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.227 [2024-04-18 17:04:13.723324] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.227 [2024-04-18 17:04:13.723331] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723338] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02440) on tqpair=0x1aa2d00 00:16:58.227 [2024-04-18 17:04:13.723399] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.723419] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.723439] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723447] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.723457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.227 [2024-04-18 17:04:13.723478] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02440, cid 4, qid 0 00:16:58.227 [2024-04-18 17:04:13.723626] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.227 [2024-04-18 17:04:13.723642] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.227 [2024-04-18 17:04:13.723648] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723655] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aa2d00): datao=0, datal=4096, cccid=4 00:16:58.227 [2024-04-18 17:04:13.723662] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b02440) on tqpair(0x1aa2d00): expected_datao=0, payload_size=4096 00:16:58.227 [2024-04-18 17:04:13.723670] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723689] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723698] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723755] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.227 [2024-04-18 17:04:13.723770] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.227 [2024-04-18 17:04:13.723777] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.723784] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02440) on tqpair=0x1aa2d00 00:16:58.227 [2024-04-18 17:04:13.723799] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:58.227 [2024-04-18 17:04:13.723816] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.723836] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.723850] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 
[2024-04-18 17:04:13.723858] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aa2d00) 00:16:58.227 [2024-04-18 17:04:13.723869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.227 [2024-04-18 17:04:13.723889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02440, cid 4, qid 0 00:16:58.227 [2024-04-18 17:04:13.724012] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.227 [2024-04-18 17:04:13.724024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.227 [2024-04-18 17:04:13.724031] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.724037] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aa2d00): datao=0, datal=4096, cccid=4 00:16:58.227 [2024-04-18 17:04:13.724045] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b02440) on tqpair(0x1aa2d00): expected_datao=0, payload_size=4096 00:16:58.227 [2024-04-18 17:04:13.724052] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.724068] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.724078] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.724110] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.227 [2024-04-18 17:04:13.724121] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.227 [2024-04-18 17:04:13.724127] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.724134] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02440) on tqpair=0x1aa2d00 00:16:58.227 [2024-04-18 17:04:13.724155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.724173] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:58.227 [2024-04-18 17:04:13.724186] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.227 [2024-04-18 17:04:13.724194] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.724205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.724226] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02440, cid 4, qid 0 00:16:58.228 [2024-04-18 17:04:13.727393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.228 [2024-04-18 17:04:13.727410] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.228 [2024-04-18 17:04:13.727417] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727423] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aa2d00): datao=0, datal=4096, cccid=4 00:16:58.228 [2024-04-18 17:04:13.727431] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b02440) on tqpair(0x1aa2d00): expected_datao=0, payload_size=4096 00:16:58.228 [2024-04-18 17:04:13.727438] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727448] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727455] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.228 [2024-04-18 17:04:13.727473] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.228 [2024-04-18 17:04:13.727479] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727486] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02440) on tqpair=0x1aa2d00 00:16:58.228 [2024-04-18 17:04:13.727506] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:58.228 [2024-04-18 17:04:13.727522] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:58.228 [2024-04-18 17:04:13.727552] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:58.228 [2024-04-18 17:04:13.727563] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:58.228 [2024-04-18 17:04:13.727572] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:58.228 [2024-04-18 17:04:13.727580] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:58.228 [2024-04-18 17:04:13.727588] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:58.228 [2024-04-18 17:04:13.727596] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:58.228 [2024-04-18 17:04:13.727614] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727623] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 
17:04:13.727633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.727644] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727651] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727658] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.727683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.228 [2024-04-18 17:04:13.727708] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02440, cid 4, qid 0 00:16:58.228 [2024-04-18 17:04:13.727720] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b025a0, cid 5, qid 0 00:16:58.228 [2024-04-18 17:04:13.727884] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.228 [2024-04-18 17:04:13.727900] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.228 [2024-04-18 17:04:13.727907] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727913] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02440) on tqpair=0x1aa2d00 00:16:58.228 [2024-04-18 17:04:13.727925] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.228 [2024-04-18 17:04:13.727934] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.228 [2024-04-18 17:04:13.727940] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.727947] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b025a0) on tqpair=0x1aa2d00 00:16:58.228 [2024-04-18 17:04:13.727964] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 
[2024-04-18 17:04:13.727973] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.727983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.728004] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b025a0, cid 5, qid 0 00:16:58.228 [2024-04-18 17:04:13.728108] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.228 [2024-04-18 17:04:13.728123] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.228 [2024-04-18 17:04:13.728129] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728140] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b025a0) on tqpair=0x1aa2d00 00:16:58.228 [2024-04-18 17:04:13.728158] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728167] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.728177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.728197] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b025a0, cid 5, qid 0 00:16:58.228 [2024-04-18 17:04:13.728300] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.228 [2024-04-18 17:04:13.728313] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.228 [2024-04-18 17:04:13.728320] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b025a0) on tqpair=0x1aa2d00 00:16:58.228 [2024-04-18 17:04:13.728343] 
nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728351] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.728362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.728389] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b025a0, cid 5, qid 0 00:16:58.228 [2024-04-18 17:04:13.728497] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.228 [2024-04-18 17:04:13.728512] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.228 [2024-04-18 17:04:13.728519] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728526] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b025a0) on tqpair=0x1aa2d00 00:16:58.228 [2024-04-18 17:04:13.728546] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728557] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.728567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.728579] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728587] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.728596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.728608] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728615] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.728624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.728636] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.728644] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1aa2d00) 00:16:58.228 [2024-04-18 17:04:13.728653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.228 [2024-04-18 17:04:13.728675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b025a0, cid 5, qid 0 00:16:58.228 [2024-04-18 17:04:13.728685] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02440, cid 4, qid 0 00:16:58.228 [2024-04-18 17:04:13.728693] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02700, cid 6, qid 0 00:16:58.228 [2024-04-18 17:04:13.728701] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02860, cid 7, qid 0 00:16:58.228 [2024-04-18 17:04:13.728971] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.228 [2024-04-18 17:04:13.728987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.228 [2024-04-18 17:04:13.728995] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.228 [2024-04-18 17:04:13.729001] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aa2d00): datao=0, datal=8192, cccid=5 00:16:58.228 [2024-04-18 17:04:13.729009] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b025a0) on 
tqpair(0x1aa2d00): expected_datao=0, payload_size=8192 00:16:58.228 [2024-04-18 17:04:13.729016] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729026] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729034] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729043] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.229 [2024-04-18 17:04:13.729052] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.229 [2024-04-18 17:04:13.729058] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729064] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aa2d00): datao=0, datal=512, cccid=4 00:16:58.229 [2024-04-18 17:04:13.729072] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b02440) on tqpair(0x1aa2d00): expected_datao=0, payload_size=512 00:16:58.229 [2024-04-18 17:04:13.729079] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729088] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729095] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729104] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.229 [2024-04-18 17:04:13.729113] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.229 [2024-04-18 17:04:13.729119] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729125] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aa2d00): datao=0, datal=512, cccid=6 00:16:58.229 [2024-04-18 17:04:13.729133] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b02700) on tqpair(0x1aa2d00): expected_datao=0, payload_size=512 00:16:58.229 
[2024-04-18 17:04:13.729140] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729149] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729156] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729164] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.229 [2024-04-18 17:04:13.729173] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.229 [2024-04-18 17:04:13.729180] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729186] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1aa2d00): datao=0, datal=4096, cccid=7 00:16:58.229 [2024-04-18 17:04:13.729194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b02860) on tqpair(0x1aa2d00): expected_datao=0, payload_size=4096 00:16:58.229 [2024-04-18 17:04:13.729201] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729211] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729218] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729230] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.229 [2024-04-18 17:04:13.729240] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.229 [2024-04-18 17:04:13.729246] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729253] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b025a0) on tqpair=0x1aa2d00 00:16:58.229 [2024-04-18 17:04:13.729274] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.229 [2024-04-18 17:04:13.729286] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.229 [2024-04-18 17:04:13.729295] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729302] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02440) on tqpair=0x1aa2d00 00:16:58.229 [2024-04-18 17:04:13.729332] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.229 [2024-04-18 17:04:13.729343] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.229 [2024-04-18 17:04:13.729349] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729356] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02700) on tqpair=0x1aa2d00 00:16:58.229 [2024-04-18 17:04:13.729367] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.229 [2024-04-18 17:04:13.729401] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.229 [2024-04-18 17:04:13.729408] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.229 [2024-04-18 17:04:13.729414] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02860) on tqpair=0x1aa2d00 00:16:58.229 ===================================================== 00:16:58.229 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.229 ===================================================== 00:16:58.229 Controller Capabilities/Features 00:16:58.229 ================================ 00:16:58.229 Vendor ID: 8086 00:16:58.229 Subsystem Vendor ID: 8086 00:16:58.229 Serial Number: SPDK00000000000001 00:16:58.229 Model Number: SPDK bdev Controller 00:16:58.229 Firmware Version: 24.05 00:16:58.229 Recommended Arb Burst: 6 00:16:58.229 IEEE OUI Identifier: e4 d2 5c 00:16:58.229 Multi-path I/O 00:16:58.229 May have multiple subsystem ports: Yes 00:16:58.229 May have multiple controllers: Yes 00:16:58.229 Associated with SR-IOV VF: No 00:16:58.229 Max Data Transfer Size: 131072 00:16:58.229 Max Number of Namespaces: 32 
00:16:58.229 Max Number of I/O Queues: 127 00:16:58.229 NVMe Specification Version (VS): 1.3 00:16:58.229 NVMe Specification Version (Identify): 1.3 00:16:58.229 Maximum Queue Entries: 128 00:16:58.229 Contiguous Queues Required: Yes 00:16:58.229 Arbitration Mechanisms Supported 00:16:58.229 Weighted Round Robin: Not Supported 00:16:58.229 Vendor Specific: Not Supported 00:16:58.229 Reset Timeout: 15000 ms 00:16:58.229 Doorbell Stride: 4 bytes 00:16:58.229 NVM Subsystem Reset: Not Supported 00:16:58.229 Command Sets Supported 00:16:58.229 NVM Command Set: Supported 00:16:58.229 Boot Partition: Not Supported 00:16:58.229 Memory Page Size Minimum: 4096 bytes 00:16:58.229 Memory Page Size Maximum: 4096 bytes 00:16:58.229 Persistent Memory Region: Not Supported 00:16:58.229 Optional Asynchronous Events Supported 00:16:58.229 Namespace Attribute Notices: Supported 00:16:58.229 Firmware Activation Notices: Not Supported 00:16:58.229 ANA Change Notices: Not Supported 00:16:58.229 PLE Aggregate Log Change Notices: Not Supported 00:16:58.229 LBA Status Info Alert Notices: Not Supported 00:16:58.229 EGE Aggregate Log Change Notices: Not Supported 00:16:58.229 Normal NVM Subsystem Shutdown event: Not Supported 00:16:58.229 Zone Descriptor Change Notices: Not Supported 00:16:58.229 Discovery Log Change Notices: Not Supported 00:16:58.229 Controller Attributes 00:16:58.229 128-bit Host Identifier: Supported 00:16:58.229 Non-Operational Permissive Mode: Not Supported 00:16:58.229 NVM Sets: Not Supported 00:16:58.229 Read Recovery Levels: Not Supported 00:16:58.229 Endurance Groups: Not Supported 00:16:58.229 Predictable Latency Mode: Not Supported 00:16:58.229 Traffic Based Keep ALive: Not Supported 00:16:58.229 Namespace Granularity: Not Supported 00:16:58.229 SQ Associations: Not Supported 00:16:58.229 UUID List: Not Supported 00:16:58.229 Multi-Domain Subsystem: Not Supported 00:16:58.229 Fixed Capacity Management: Not Supported 00:16:58.229 Variable Capacity Management: Not 
Supported 00:16:58.229 Delete Endurance Group: Not Supported 00:16:58.229 Delete NVM Set: Not Supported 00:16:58.229 Extended LBA Formats Supported: Not Supported 00:16:58.229 Flexible Data Placement Supported: Not Supported 00:16:58.229 00:16:58.229 Controller Memory Buffer Support 00:16:58.230 ================================ 00:16:58.230 Supported: No 00:16:58.230 00:16:58.230 Persistent Memory Region Support 00:16:58.230 ================================ 00:16:58.230 Supported: No 00:16:58.230 00:16:58.230 Admin Command Set Attributes 00:16:58.230 ============================ 00:16:58.230 Security Send/Receive: Not Supported 00:16:58.230 Format NVM: Not Supported 00:16:58.230 Firmware Activate/Download: Not Supported 00:16:58.230 Namespace Management: Not Supported 00:16:58.230 Device Self-Test: Not Supported 00:16:58.230 Directives: Not Supported 00:16:58.230 NVMe-MI: Not Supported 00:16:58.230 Virtualization Management: Not Supported 00:16:58.230 Doorbell Buffer Config: Not Supported 00:16:58.230 Get LBA Status Capability: Not Supported 00:16:58.230 Command & Feature Lockdown Capability: Not Supported 00:16:58.230 Abort Command Limit: 4 00:16:58.230 Async Event Request Limit: 4 00:16:58.230 Number of Firmware Slots: N/A 00:16:58.230 Firmware Slot 1 Read-Only: N/A 00:16:58.230 Firmware Activation Without Reset: N/A 00:16:58.230 Multiple Update Detection Support: N/A 00:16:58.230 Firmware Update Granularity: No Information Provided 00:16:58.230 Per-Namespace SMART Log: No 00:16:58.230 Asymmetric Namespace Access Log Page: Not Supported 00:16:58.230 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:58.230 Command Effects Log Page: Supported 00:16:58.230 Get Log Page Extended Data: Supported 00:16:58.230 Telemetry Log Pages: Not Supported 00:16:58.230 Persistent Event Log Pages: Not Supported 00:16:58.230 Supported Log Pages Log Page: May Support 00:16:58.230 Commands Supported & Effects Log Page: Not Supported 00:16:58.230 Feature Identifiers & Effects Log Page:May 
Support 00:16:58.230 NVMe-MI Commands & Effects Log Page: May Support 00:16:58.230 Data Area 4 for Telemetry Log: Not Supported 00:16:58.230 Error Log Page Entries Supported: 128 00:16:58.230 Keep Alive: Supported 00:16:58.230 Keep Alive Granularity: 10000 ms 00:16:58.230 00:16:58.230 NVM Command Set Attributes 00:16:58.230 ========================== 00:16:58.230 Submission Queue Entry Size 00:16:58.230 Max: 64 00:16:58.230 Min: 64 00:16:58.230 Completion Queue Entry Size 00:16:58.230 Max: 16 00:16:58.230 Min: 16 00:16:58.230 Number of Namespaces: 32 00:16:58.230 Compare Command: Supported 00:16:58.230 Write Uncorrectable Command: Not Supported 00:16:58.230 Dataset Management Command: Supported 00:16:58.230 Write Zeroes Command: Supported 00:16:58.230 Set Features Save Field: Not Supported 00:16:58.230 Reservations: Supported 00:16:58.230 Timestamp: Not Supported 00:16:58.230 Copy: Supported 00:16:58.230 Volatile Write Cache: Present 00:16:58.230 Atomic Write Unit (Normal): 1 00:16:58.230 Atomic Write Unit (PFail): 1 00:16:58.230 Atomic Compare & Write Unit: 1 00:16:58.230 Fused Compare & Write: Supported 00:16:58.230 Scatter-Gather List 00:16:58.230 SGL Command Set: Supported 00:16:58.230 SGL Keyed: Supported 00:16:58.230 SGL Bit Bucket Descriptor: Not Supported 00:16:58.230 SGL Metadata Pointer: Not Supported 00:16:58.230 Oversized SGL: Not Supported 00:16:58.230 SGL Metadata Address: Not Supported 00:16:58.230 SGL Offset: Supported 00:16:58.230 Transport SGL Data Block: Not Supported 00:16:58.230 Replay Protected Memory Block: Not Supported 00:16:58.230 00:16:58.230 Firmware Slot Information 00:16:58.230 ========================= 00:16:58.230 Active slot: 1 00:16:58.230 Slot 1 Firmware Revision: 24.05 00:16:58.230 00:16:58.230 00:16:58.230 Commands Supported and Effects 00:16:58.230 ============================== 00:16:58.230 Admin Commands 00:16:58.230 -------------- 00:16:58.230 Get Log Page (02h): Supported 00:16:58.230 Identify (06h): Supported 00:16:58.230 
Abort (08h): Supported 00:16:58.230 Set Features (09h): Supported 00:16:58.230 Get Features (0Ah): Supported 00:16:58.230 Asynchronous Event Request (0Ch): Supported 00:16:58.230 Keep Alive (18h): Supported 00:16:58.230 I/O Commands 00:16:58.230 ------------ 00:16:58.230 Flush (00h): Supported LBA-Change 00:16:58.230 Write (01h): Supported LBA-Change 00:16:58.230 Read (02h): Supported 00:16:58.230 Compare (05h): Supported 00:16:58.230 Write Zeroes (08h): Supported LBA-Change 00:16:58.230 Dataset Management (09h): Supported LBA-Change 00:16:58.230 Copy (19h): Supported LBA-Change 00:16:58.230 Unknown (79h): Supported LBA-Change 00:16:58.230 Unknown (7Ah): Supported 00:16:58.230 00:16:58.230 Error Log 00:16:58.230 ========= 00:16:58.230 00:16:58.230 Arbitration 00:16:58.230 =========== 00:16:58.230 Arbitration Burst: 1 00:16:58.230 00:16:58.230 Power Management 00:16:58.230 ================ 00:16:58.230 Number of Power States: 1 00:16:58.230 Current Power State: Power State #0 00:16:58.230 Power State #0: 00:16:58.230 Max Power: 0.00 W 00:16:58.230 Non-Operational State: Operational 00:16:58.230 Entry Latency: Not Reported 00:16:58.230 Exit Latency: Not Reported 00:16:58.230 Relative Read Throughput: 0 00:16:58.230 Relative Read Latency: 0 00:16:58.230 Relative Write Throughput: 0 00:16:58.230 Relative Write Latency: 0 00:16:58.230 Idle Power: Not Reported 00:16:58.230 Active Power: Not Reported 00:16:58.230 Non-Operational Permissive Mode: Not Supported 00:16:58.230 00:16:58.230 Health Information 00:16:58.230 ================== 00:16:58.230 Critical Warnings: 00:16:58.230 Available Spare Space: OK 00:16:58.230 Temperature: OK 00:16:58.230 Device Reliability: OK 00:16:58.230 Read Only: No 00:16:58.230 Volatile Memory Backup: OK 00:16:58.230 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:58.230 Temperature Threshold: [2024-04-18 17:04:13.729573] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.230 [2024-04-18 17:04:13.729585] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1aa2d00) 00:16:58.230 [2024-04-18 17:04:13.729596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.230 [2024-04-18 17:04:13.729618] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b02860, cid 7, qid 0 00:16:58.230 [2024-04-18 17:04:13.729773] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.230 [2024-04-18 17:04:13.729789] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.230 [2024-04-18 17:04:13.729796] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.230 [2024-04-18 17:04:13.729802] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b02860) on tqpair=0x1aa2d00 00:16:58.230 [2024-04-18 17:04:13.729842] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:58.230 [2024-04-18 17:04:13.729863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.230 [2024-04-18 17:04:13.729874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.230 [2024-04-18 17:04:13.729884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.231 [2024-04-18 17:04:13.729893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.231 [2024-04-18 17:04:13.729905] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.729913] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.729919] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.729944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.729966] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.730130] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.730143] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.730149] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730156] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 [2024-04-18 17:04:13.730168] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730176] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.730196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.730223] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.730334] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.730347] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.730353] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730360] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 
[2024-04-18 17:04:13.730369] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:58.231 [2024-04-18 17:04:13.730377] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:58.231 [2024-04-18 17:04:13.730401] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730411] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730417] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.730428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.730448] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.730550] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.730562] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.730569] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730576] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 [2024-04-18 17:04:13.730593] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730601] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730608] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.730618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.730638] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.730733] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.730745] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.730752] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730759] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 [2024-04-18 17:04:13.730775] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730784] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730790] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.730801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.730820] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.730913] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.730925] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.730932] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730939] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 [2024-04-18 17:04:13.730956] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730964] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.730974] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.730985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.731006] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.731105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.731120] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.731127] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.731133] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 [2024-04-18 17:04:13.731151] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.731159] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.731166] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.731176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.731196] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.731294] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.731309] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.731316] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.731322] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 [2024-04-18 17:04:13.731340] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.731349] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.731355] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.731365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.735391] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.735411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.735422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.735429] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.735435] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 [2024-04-18 17:04:13.735468] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.735478] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.735484] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1aa2d00) 00:16:58.231 [2024-04-18 17:04:13.735495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.231 [2024-04-18 17:04:13.735517] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b022e0, cid 3, qid 0 00:16:58.231 [2024-04-18 17:04:13.735670] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.231 [2024-04-18 17:04:13.735685] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.231 [2024-04-18 17:04:13.735693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:16:58.231 [2024-04-18 17:04:13.735699] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b022e0) on tqpair=0x1aa2d00 00:16:58.231 [2024-04-18 17:04:13.735714] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:16:58.231 0 Kelvin (-273 Celsius) 00:16:58.231 Available Spare: 0% 00:16:58.231 Available Spare Threshold: 0% 00:16:58.231 Life Percentage Used: 0% 00:16:58.231 Data Units Read: 0 00:16:58.231 Data Units Written: 0 00:16:58.231 Host Read Commands: 0 00:16:58.231 Host Write Commands: 0 00:16:58.231 Controller Busy Time: 0 minutes 00:16:58.231 Power Cycles: 0 00:16:58.231 Power On Hours: 0 hours 00:16:58.231 Unsafe Shutdowns: 0 00:16:58.231 Unrecoverable Media Errors: 0 00:16:58.231 Lifetime Error Log Entries: 0 00:16:58.231 Warning Temperature Time: 0 minutes 00:16:58.231 Critical Temperature Time: 0 minutes 00:16:58.231 00:16:58.231 Number of Queues 00:16:58.231 ================ 00:16:58.231 Number of I/O Submission Queues: 127 00:16:58.231 Number of I/O Completion Queues: 127 00:16:58.231 00:16:58.231 Active Namespaces 00:16:58.231 ================= 00:16:58.231 Namespace ID:1 00:16:58.231 Error Recovery Timeout: Unlimited 00:16:58.231 Command Set Identifier: NVM (00h) 00:16:58.231 Deallocate: Supported 00:16:58.232 Deallocated/Unwritten Error: Not Supported 00:16:58.232 Deallocated Read Value: Unknown 00:16:58.232 Deallocate in Write Zeroes: Not Supported 00:16:58.232 Deallocated Guard Field: 0xFFFF 00:16:58.232 Flush: Supported 00:16:58.232 Reservation: Supported 00:16:58.232 Namespace Sharing Capabilities: Multiple Controllers 00:16:58.232 Size (in LBAs): 131072 (0GiB) 00:16:58.232 Capacity (in LBAs): 131072 (0GiB) 00:16:58.232 Utilization (in LBAs): 131072 (0GiB) 00:16:58.232 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:58.232 EUI64: ABCDEF0123456789 00:16:58.232 UUID: eac31345-f186-409c-aa7c-9e579017c961 00:16:58.232 Thin Provisioning: 
Not Supported 00:16:58.232 Per-NS Atomic Units: Yes 00:16:58.232 Atomic Boundary Size (Normal): 0 00:16:58.232 Atomic Boundary Size (PFail): 0 00:16:58.232 Atomic Boundary Offset: 0 00:16:58.232 Maximum Single Source Range Length: 65535 00:16:58.232 Maximum Copy Length: 65535 00:16:58.232 Maximum Source Range Count: 1 00:16:58.232 NGUID/EUI64 Never Reused: No 00:16:58.232 Namespace Write Protected: No 00:16:58.232 Number of LBA Formats: 1 00:16:58.232 Current LBA Format: LBA Format #00 00:16:58.232 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:58.232 00:16:58.232 17:04:13 -- host/identify.sh@51 -- # sync 00:16:58.232 17:04:13 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.232 17:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.232 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:58.232 17:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.232 17:04:13 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:58.232 17:04:13 -- host/identify.sh@56 -- # nvmftestfini 00:16:58.232 17:04:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:58.232 17:04:13 -- nvmf/common.sh@117 -- # sync 00:16:58.232 17:04:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.232 17:04:13 -- nvmf/common.sh@120 -- # set +e 00:16:58.232 17:04:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.232 17:04:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.232 rmmod nvme_tcp 00:16:58.232 rmmod nvme_fabrics 00:16:58.232 rmmod nvme_keyring 00:16:58.232 17:04:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.232 17:04:13 -- nvmf/common.sh@124 -- # set -e 00:16:58.232 17:04:13 -- nvmf/common.sh@125 -- # return 0 00:16:58.232 17:04:13 -- nvmf/common.sh@478 -- # '[' -n 1720720 ']' 00:16:58.232 17:04:13 -- nvmf/common.sh@479 -- # killprocess 1720720 00:16:58.232 17:04:13 -- common/autotest_common.sh@936 -- # '[' -z 1720720 ']' 00:16:58.232 17:04:13 -- 
common/autotest_common.sh@940 -- # kill -0 1720720 00:16:58.232 17:04:13 -- common/autotest_common.sh@941 -- # uname 00:16:58.232 17:04:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:58.232 17:04:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1720720 00:16:58.232 17:04:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:58.232 17:04:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:58.232 17:04:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1720720' 00:16:58.232 killing process with pid 1720720 00:16:58.232 17:04:13 -- common/autotest_common.sh@955 -- # kill 1720720 00:16:58.232 [2024-04-18 17:04:13.827458] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:58.232 17:04:13 -- common/autotest_common.sh@960 -- # wait 1720720 00:16:58.492 17:04:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:58.492 17:04:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:58.492 17:04:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:58.492 17:04:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.492 17:04:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.492 17:04:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.492 17:04:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.492 17:04:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.030 17:04:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:01.030 00:17:01.030 real 0m5.919s 00:17:01.030 user 0m6.851s 00:17:01.030 sys 0m1.827s 00:17:01.030 17:04:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:01.030 17:04:16 -- common/autotest_common.sh@10 -- # set +x 00:17:01.030 ************************************ 00:17:01.030 END TEST nvmf_identify 00:17:01.030 
************************************ 00:17:01.030 17:04:16 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:01.030 17:04:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:01.030 17:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.030 17:04:16 -- common/autotest_common.sh@10 -- # set +x 00:17:01.030 ************************************ 00:17:01.030 START TEST nvmf_perf 00:17:01.030 ************************************ 00:17:01.030 17:04:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:01.030 * Looking for test storage... 00:17:01.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:01.030 17:04:16 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.030 17:04:16 -- nvmf/common.sh@7 -- # uname -s 00:17:01.030 17:04:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.030 17:04:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.030 17:04:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.030 17:04:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.030 17:04:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.030 17:04:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.030 17:04:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.030 17:04:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.030 17:04:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.030 17:04:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.030 17:04:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.030 17:04:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.030 17:04:16 -- nvmf/common.sh@19 -- 
# NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.030 17:04:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.030 17:04:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.030 17:04:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.030 17:04:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.030 17:04:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.030 17:04:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.030 17:04:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.030 17:04:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.030 17:04:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.030 17:04:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.030 17:04:16 -- paths/export.sh@5 -- # export PATH 00:17:01.030 17:04:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.030 17:04:16 -- nvmf/common.sh@47 -- # : 0 00:17:01.030 17:04:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.031 17:04:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.031 17:04:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.031 17:04:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.031 17:04:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.031 17:04:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.031 17:04:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.031 17:04:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.031 17:04:16 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:01.031 17:04:16 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:01.031 17:04:16 -- host/perf.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.031 17:04:16 -- host/perf.sh@17 -- # nvmftestinit 00:17:01.031 17:04:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:01.031 17:04:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.031 17:04:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:01.031 17:04:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:01.031 17:04:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:01.031 17:04:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.031 17:04:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.031 17:04:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.031 17:04:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:01.031 17:04:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:01.031 17:04:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:01.031 17:04:16 -- common/autotest_common.sh@10 -- # set +x 00:17:02.932 17:04:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:02.932 17:04:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.932 17:04:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.932 17:04:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.932 17:04:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.932 17:04:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.932 17:04:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.932 17:04:18 -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.932 17:04:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.932 17:04:18 -- nvmf/common.sh@296 -- # e810=() 00:17:02.932 17:04:18 -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.932 17:04:18 -- nvmf/common.sh@297 -- # x722=() 00:17:02.932 17:04:18 -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.932 17:04:18 -- nvmf/common.sh@298 -- # mlx=() 00:17:02.932 17:04:18 -- nvmf/common.sh@298 -- # local -ga mlx 
00:17:02.932 17:04:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.933 17:04:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.933 17:04:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.933 17:04:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.933 17:04:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.933 17:04:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.933 17:04:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.933 17:04:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.933 17:04:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.933 17:04:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.933 17:04:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.933 17:04:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:02.933 17:04:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.933 17:04:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.933 17:04:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.933 17:04:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.933 17:04:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.933 17:04:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:02.933 17:04:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.933 17:04:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.933 17:04:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.933 17:04:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 
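The trace above shows nvmf/common.sh bucketing discovered PCI functions into e810/x722/mlx arrays by vendor:device ID before resolving their net devices. A minimal Python sketch of that same bucketing, for illustration only (the function and variable names here are hypothetical, not part of SPDK; the ID tables are the ones visible in the trace):

```python
# Re-implementation sketch of the NIC classification done by nvmf/common.sh
# above: group PCI functions into e810 / x722 / mlx buckets by vendor:device.
INTEL, MELLANOX = 0x8086, 0x15B3

# Device-ID tables as they appear in the trace lines above.
E810_IDS = {0x1592, 0x159B}
X722_IDS = {0x37D2}
MLX_IDS = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x1017, 0x1019, 0x1015, 0x1013}

def classify(pci_devs):
    """pci_devs: iterable of (bdf, vendor_id, device_id) tuples."""
    buckets = {"e810": [], "x722": [], "mlx": []}
    for bdf, vendor, device in pci_devs:
        if vendor == INTEL and device in E810_IDS:
            buckets["e810"].append(bdf)
        elif vendor == INTEL and device in X722_IDS:
            buckets["x722"].append(bdf)
        elif vendor == MELLANOX and device in MLX_IDS:
            buckets["mlx"].append(bdf)
    return buckets

# The two ports found in this run: 0000:0a:00.0/.1, 0x8086:0x159b (ice/E810).
found = classify([("0000:0a:00.0", 0x8086, 0x159B),
                  ("0000:0a:00.1", 0x8086, 0x159B)])
```

With two 0x159b functions present, both land in the e810 bucket, which is why the trace then takes the `[[ e810 == e810 ]]` branch and reports "Found net devices under 0000:0a:00.x".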
00:17:02.933 17:04:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:02.933 17:04:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:02.933 17:04:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.933 17:04:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.933 17:04:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.933 17:04:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.933 17:04:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.933 17:04:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.933 17:04:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.933 17:04:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.933 17:04:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.933 17:04:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.933 17:04:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.933 17:04:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.933 17:04:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.933 17:04:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.933 17:04:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.933 17:04:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.933 17:04:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.933 17:04:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.933 17:04:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.933 17:04:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.933 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:17:02.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:17:02.933 00:17:02.933 --- 10.0.0.2 ping statistics --- 00:17:02.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.933 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:17:02.933 17:04:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:17:02.933 00:17:02.933 --- 10.0.0.1 ping statistics --- 00:17:02.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.933 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:17:02.933 17:04:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.933 17:04:18 -- nvmf/common.sh@411 -- # return 0 00:17:02.933 17:04:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:02.933 17:04:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.933 17:04:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:02.933 17:04:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.933 17:04:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:02.933 17:04:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:02.933 17:04:18 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:02.933 17:04:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:02.933 17:04:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:02.933 17:04:18 -- common/autotest_common.sh@10 -- # set +x 00:17:02.933 17:04:18 -- nvmf/common.sh@470 -- # nvmfpid=1722821 00:17:02.933 17:04:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.933 17:04:18 -- nvmf/common.sh@471 -- # waitforlisten 1722821 00:17:02.933 17:04:18 -- 
common/autotest_common.sh@817 -- # '[' -z 1722821 ']' 00:17:02.933 17:04:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.933 17:04:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:02.933 17:04:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.933 17:04:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:02.933 17:04:18 -- common/autotest_common.sh@10 -- # set +x 00:17:02.933 [2024-04-18 17:04:18.502037] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:02.933 [2024-04-18 17:04:18.502112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.933 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.933 [2024-04-18 17:04:18.565435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.191 [2024-04-18 17:04:18.675214] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.191 [2024-04-18 17:04:18.675278] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.191 [2024-04-18 17:04:18.675294] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.191 [2024-04-18 17:04:18.675307] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.191 [2024-04-18 17:04:18.675319] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:03.191 [2024-04-18 17:04:18.675409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.191 [2024-04-18 17:04:18.675454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.191 [2024-04-18 17:04:18.675544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.191 [2024-04-18 17:04:18.675547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.129 17:04:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:04.129 17:04:19 -- common/autotest_common.sh@850 -- # return 0 00:17:04.129 17:04:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:04.129 17:04:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:04.129 17:04:19 -- common/autotest_common.sh@10 -- # set +x 00:17:04.129 17:04:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.129 17:04:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:04.129 17:04:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:07.412 17:04:22 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:07.412 17:04:22 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:07.412 17:04:22 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:17:07.412 17:04:22 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.412 17:04:23 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:07.412 17:04:23 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:17:07.412 17:04:23 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:07.412 17:04:23 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:07.412 17:04:23 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:17:07.670 [2024-04-18 17:04:23.317449] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.670 17:04:23 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.927 17:04:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:07.927 17:04:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:08.184 17:04:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:08.184 17:04:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:08.441 17:04:24 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.699 [2024-04-18 17:04:24.284978] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.699 17:04:24 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:08.955 17:04:24 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:17:08.956 17:04:24 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:08.956 17:04:24 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:08.956 17:04:24 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:10.327 Initializing NVMe Controllers 00:17:10.327 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:17:10.327 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:17:10.327 Initialization complete. Launching workers. 
00:17:10.327 ======================================================== 00:17:10.327 Latency(us) 00:17:10.327 Device Information : IOPS MiB/s Average min max 00:17:10.327 PCIE (0000:88:00.0) NSID 1 from core 0: 84574.90 330.37 377.81 32.50 4505.13 00:17:10.327 ======================================================== 00:17:10.327 Total : 84574.90 330.37 377.81 32.50 4505.13 00:17:10.327 00:17:10.327 17:04:25 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:10.327 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.703 Initializing NVMe Controllers 00:17:11.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:11.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:11.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:11.703 Initialization complete. Launching workers. 
00:17:11.703 ======================================================== 00:17:11.703 Latency(us) 00:17:11.703 Device Information : IOPS MiB/s Average min max 00:17:11.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 144.00 0.56 7051.72 160.16 45698.90 00:17:11.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18702.97 7940.02 47898.00 00:17:11.703 ======================================================== 00:17:11.703 Total : 200.00 0.78 10314.07 160.16 47898.00 00:17:11.703 00:17:11.703 17:04:26 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:11.703 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.639 Initializing NVMe Controllers 00:17:12.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:12.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:12.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:12.639 Initialization complete. Launching workers. 
00:17:12.639 ======================================================== 00:17:12.639 Latency(us) 00:17:12.639 Device Information : IOPS MiB/s Average min max 00:17:12.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8472.49 33.10 3779.09 545.79 7824.04 00:17:12.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3832.32 14.97 8392.99 6880.09 16222.08 00:17:12.639 ======================================================== 00:17:12.639 Total : 12304.81 48.07 5216.08 545.79 16222.08 00:17:12.639 00:17:12.639 17:04:28 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:17:12.639 17:04:28 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:17:12.639 17:04:28 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:12.639 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.172 Initializing NVMe Controllers 00:17:15.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:15.172 Controller IO queue size 128, less than required. 00:17:15.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.172 Controller IO queue size 128, less than required. 00:17:15.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:15.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:15.172 Initialization complete. Launching workers. 
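The "Total" rows in the spdk_nvme_perf latency tables above are IOPS-weighted means of the per-namespace average latencies, and the MiB/s column for 4 KiB IOs is simply IOPS / 256. A quick sketch checking this against the `-q 32 -o 4096` run's numbers:

```python
# Verify the Total row of the -q 32 -o 4096 perf table above:
# (IOPS, average latency in us) for NSID 1 and NSID 2.
rows = [
    (8472.49, 3779.09),
    (3832.32, 8392.99),
]

total_iops = sum(iops for iops, _ in rows)
# Total average latency is the IOPS-weighted mean of the per-ns averages.
total_avg = sum(iops * avg for iops, avg in rows) / total_iops
# MiB/s for 4096-byte IOs: IOPS * 4096 / 2**20 == IOPS / 256.
mibs = [iops / 256 for iops, _ in rows]
```

This reproduces the table's Total line (12304.81 IOPS, ~5216.08 us average) and the per-namespace throughput columns (33.10 and 14.97 MiB/s).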
00:17:15.172 ======================================================== 00:17:15.172 Latency(us) 00:17:15.172 Device Information : IOPS MiB/s Average min max 00:17:15.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1515.22 378.81 85928.57 55696.81 159233.73 00:17:15.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.31 149.33 225028.07 84803.99 328598.56 00:17:15.172 ======================================================== 00:17:15.172 Total : 2112.54 528.13 125258.57 55696.81 328598.56 00:17:15.172 00:17:15.172 17:04:30 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:15.172 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.430 No valid NVMe controllers or AIO or URING devices found 00:17:15.430 Initializing NVMe Controllers 00:17:15.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:15.430 Controller IO queue size 128, less than required. 00:17:15.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.430 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:15.430 Controller IO queue size 128, less than required. 00:17:15.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.430 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:17:15.430 WARNING: Some requested NVMe devices were skipped 00:17:15.430 17:04:30 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:15.430 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.720 Initializing NVMe Controllers 00:17:18.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:18.720 Controller IO queue size 128, less than required. 00:17:18.720 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:18.720 Controller IO queue size 128, less than required. 00:17:18.720 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:18.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:18.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:18.720 Initialization complete. Launching workers. 
00:17:18.720 00:17:18.720 ==================== 00:17:18.720 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:18.720 TCP transport: 00:17:18.720 polls: 14398 00:17:18.720 idle_polls: 10624 00:17:18.720 sock_completions: 3774 00:17:18.720 nvme_completions: 4603 00:17:18.720 submitted_requests: 6958 00:17:18.720 queued_requests: 1 00:17:18.720 00:17:18.720 ==================== 00:17:18.720 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:18.720 TCP transport: 00:17:18.720 polls: 14564 00:17:18.720 idle_polls: 8591 00:17:18.720 sock_completions: 5973 00:17:18.720 nvme_completions: 4431 00:17:18.720 submitted_requests: 6624 00:17:18.720 queued_requests: 1 00:17:18.720 ======================================================== 00:17:18.720 Latency(us) 00:17:18.721 Device Information : IOPS MiB/s Average min max 00:17:18.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1148.02 287.00 114431.35 66487.33 174265.54 00:17:18.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1105.11 276.28 117020.04 50731.45 163410.06 00:17:18.721 ======================================================== 00:17:18.721 Total : 2253.13 563.28 115701.05 50731.45 174265.54 00:17:18.721 00:17:18.721 17:04:33 -- host/perf.sh@66 -- # sync 00:17:18.721 17:04:33 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.721 17:04:33 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:18.721 17:04:33 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:18.721 17:04:33 -- host/perf.sh@114 -- # nvmftestfini 00:17:18.721 17:04:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:18.721 17:04:33 -- nvmf/common.sh@117 -- # sync 00:17:18.721 17:04:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.721 17:04:33 -- nvmf/common.sh@120 -- # set +e 00:17:18.721 17:04:33 -- nvmf/common.sh@121 
-- # for i in {1..20} 00:17:18.721 17:04:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.721 rmmod nvme_tcp 00:17:18.721 rmmod nvme_fabrics 00:17:18.721 rmmod nvme_keyring 00:17:18.721 17:04:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.721 17:04:33 -- nvmf/common.sh@124 -- # set -e 00:17:18.721 17:04:33 -- nvmf/common.sh@125 -- # return 0 00:17:18.721 17:04:33 -- nvmf/common.sh@478 -- # '[' -n 1722821 ']' 00:17:18.721 17:04:33 -- nvmf/common.sh@479 -- # killprocess 1722821 00:17:18.721 17:04:33 -- common/autotest_common.sh@936 -- # '[' -z 1722821 ']' 00:17:18.721 17:04:33 -- common/autotest_common.sh@940 -- # kill -0 1722821 00:17:18.721 17:04:33 -- common/autotest_common.sh@941 -- # uname 00:17:18.721 17:04:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.721 17:04:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1722821 00:17:18.721 17:04:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:18.721 17:04:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:18.721 17:04:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1722821' 00:17:18.721 killing process with pid 1722821 00:17:18.721 17:04:34 -- common/autotest_common.sh@955 -- # kill 1722821 00:17:18.721 17:04:34 -- common/autotest_common.sh@960 -- # wait 1722821 00:17:20.186 17:04:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:20.186 17:04:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:20.186 17:04:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:20.186 17:04:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.186 17:04:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.186 17:04:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.186 17:04:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.186 17:04:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:17:22.092 17:04:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.092 00:17:22.092 real 0m21.412s 00:17:22.092 user 1m6.144s 00:17:22.092 sys 0m5.215s 00:17:22.092 17:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:22.092 17:04:37 -- common/autotest_common.sh@10 -- # set +x 00:17:22.092 ************************************ 00:17:22.092 END TEST nvmf_perf 00:17:22.092 ************************************ 00:17:22.092 17:04:37 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:22.092 17:04:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:22.092 17:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.092 17:04:37 -- common/autotest_common.sh@10 -- # set +x 00:17:22.351 ************************************ 00:17:22.351 START TEST nvmf_fio_host 00:17:22.351 ************************************ 00:17:22.351 17:04:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:22.351 * Looking for test storage... 
00:17:22.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:22.351 17:04:37 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.351 17:04:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.351 17:04:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.351 17:04:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.351 17:04:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.351 17:04:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.351 17:04:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.351 17:04:37 -- paths/export.sh@5 -- # export PATH 00:17:22.351 17:04:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.351 17:04:37 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.351 17:04:37 -- nvmf/common.sh@7 -- # uname -s 00:17:22.351 17:04:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.351 17:04:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.351 17:04:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.351 17:04:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.351 17:04:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.351 17:04:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.351 17:04:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.351 17:04:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.351 17:04:37 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.351 17:04:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.351 17:04:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.351 17:04:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.351 17:04:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.351 17:04:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.351 17:04:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.351 17:04:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.351 17:04:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.351 17:04:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.351 17:04:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.351 17:04:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.351 17:04:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.351 17:04:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.351 17:04:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.351 17:04:37 -- paths/export.sh@5 -- # export PATH 00:17:22.351 17:04:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.351 17:04:37 -- nvmf/common.sh@47 
-- # : 0 00:17:22.351 17:04:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.351 17:04:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.351 17:04:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.351 17:04:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.351 17:04:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.351 17:04:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.351 17:04:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.351 17:04:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.351 17:04:37 -- host/fio.sh@12 -- # nvmftestinit 00:17:22.351 17:04:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:22.351 17:04:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.351 17:04:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:22.351 17:04:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:22.351 17:04:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:22.351 17:04:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.351 17:04:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.351 17:04:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.351 17:04:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:22.351 17:04:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:22.351 17:04:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.351 17:04:37 -- common/autotest_common.sh@10 -- # set +x 00:17:24.257 17:04:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:24.257 17:04:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.257 17:04:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.257 17:04:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:24.257 17:04:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.257 17:04:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.257 17:04:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:17:24.257 17:04:39 -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.257 17:04:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.257 17:04:39 -- nvmf/common.sh@296 -- # e810=() 00:17:24.257 17:04:39 -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.257 17:04:39 -- nvmf/common.sh@297 -- # x722=() 00:17:24.257 17:04:39 -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.257 17:04:39 -- nvmf/common.sh@298 -- # mlx=() 00:17:24.257 17:04:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.257 17:04:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.257 17:04:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.257 17:04:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:24.257 17:04:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.257 17:04:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:17:24.257 17:04:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:24.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:24.257 17:04:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.257 17:04:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:24.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:24.257 17:04:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.257 17:04:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.257 17:04:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.257 17:04:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:24.257 17:04:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.257 17:04:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:24.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:24.257 17:04:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.257 17:04:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.257 17:04:39 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.257 17:04:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:24.257 17:04:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.257 17:04:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:24.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:24.257 17:04:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.257 17:04:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:24.257 17:04:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:24.257 17:04:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:24.257 17:04:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:24.257 17:04:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.257 17:04:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.257 17:04:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.257 17:04:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:24.257 17:04:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.257 17:04:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.257 17:04:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:24.257 17:04:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.257 17:04:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.258 17:04:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:24.258 17:04:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:24.258 17:04:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.258 17:04:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.516 17:04:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.516 17:04:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:17:24.516 17:04:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:24.516 17:04:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.516 17:04:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.516 17:04:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.516 17:04:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:24.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:17:24.516 00:17:24.516 --- 10.0.0.2 ping statistics --- 00:17:24.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.516 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:24.516 17:04:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:17:24.516 00:17:24.516 --- 10.0.0.1 ping statistics --- 00:17:24.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.516 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:17:24.517 17:04:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.517 17:04:40 -- nvmf/common.sh@411 -- # return 0 00:17:24.517 17:04:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:24.517 17:04:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.517 17:04:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:24.517 17:04:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:24.517 17:04:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.517 17:04:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:24.517 17:04:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:24.517 17:04:40 -- host/fio.sh@14 -- # [[ y != y ]] 00:17:24.517 17:04:40 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 
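The trace above builds a point-to-point test topology by moving one port of a two-port NIC into a network namespace, assigning 10.0.0.1 to the initiator side and 10.0.0.2 to the target side, opening TCP port 4420, and verifying reachability with ping in both directions. A consolidated dry-run sketch of the same sequence (interface and namespace names mirror the log; `run` echoes each command so the script is safe to inspect without root or real hardware):

```shell
# Dry-run sketch of the namespace topology set up in the log above.
# cvl_0_0 / cvl_0_1 are the two NIC ports seen in the trace; adjust for your hardware.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target port, moved into the namespace
INI_IF=cvl_0_1   # initiator port, stays in the root namespace
run() { echo "+ $*"; }   # replace the echo with "$@" to execute for real

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
run ping -c 1 10.0.0.2                                              # connectivity check
```

Because the target port lives inside the namespace, the `nvmf_tgt` application is later launched with `ip netns exec cvl_0_0_ns_spdk ...` (visible in the log), while fio on the initiator side reaches it over the physical link.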
00:17:24.517 17:04:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:24.517 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:24.517 17:04:40 -- host/fio.sh@22 -- # nvmfpid=1726793 00:17:24.517 17:04:40 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:24.517 17:04:40 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:24.517 17:04:40 -- host/fio.sh@26 -- # waitforlisten 1726793 00:17:24.517 17:04:40 -- common/autotest_common.sh@817 -- # '[' -z 1726793 ']' 00:17:24.517 17:04:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.517 17:04:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:24.517 17:04:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.517 17:04:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:24.517 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:24.517 [2024-04-18 17:04:40.153321] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:24.517 [2024-04-18 17:04:40.153437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.517 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.776 [2024-04-18 17:04:40.229702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.776 [2024-04-18 17:04:40.349952] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:24.776 [2024-04-18 17:04:40.350012] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.776 [2024-04-18 17:04:40.350029] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.776 [2024-04-18 17:04:40.350042] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.776 [2024-04-18 17:04:40.350054] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.776 [2024-04-18 17:04:40.350116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.776 [2024-04-18 17:04:40.350173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.776 [2024-04-18 17:04:40.350209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.776 [2024-04-18 17:04:40.350212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.035 17:04:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:25.035 17:04:40 -- common/autotest_common.sh@850 -- # return 0 00:17:25.035 17:04:40 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.035 17:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.035 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:25.035 [2024-04-18 17:04:40.490071] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.035 17:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.035 17:04:40 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:17:25.035 17:04:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:25.035 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:25.035 17:04:40 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:25.035 17:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.035 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:25.035 Malloc1 
00:17:25.035 17:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.035 17:04:40 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:25.035 17:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.035 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:25.035 17:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.035 17:04:40 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.035 17:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.035 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:25.035 17:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.035 17:04:40 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.035 17:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.035 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:25.035 [2024-04-18 17:04:40.568237] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.035 17:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.035 17:04:40 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.035 17:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.035 17:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:25.035 17:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.035 17:04:40 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:17:25.035 17:04:40 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:25.035 17:04:40 -- common/autotest_common.sh@1346 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:25.035 17:04:40 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:25.035 17:04:40 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:25.035 17:04:40 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:25.035 17:04:40 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:25.035 17:04:40 -- common/autotest_common.sh@1327 -- # shift 00:17:25.035 17:04:40 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:25.035 17:04:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.035 17:04:40 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:25.035 17:04:40 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:25.035 17:04:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:25.035 17:04:40 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:25.035 17:04:40 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:25.035 17:04:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.035 17:04:40 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:25.035 17:04:40 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:25.035 17:04:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:25.035 17:04:40 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:25.035 17:04:40 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:25.035 17:04:40 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:17:25.035 17:04:40 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:25.293 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:25.293 fio-3.35 00:17:25.293 Starting 1 thread 00:17:25.293 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.821 00:17:27.821 test: (groupid=0, jobs=1): err= 0: pid=1727012: Thu Apr 18 17:04:43 2024 00:17:27.821 read: IOPS=8891, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2006msec) 00:17:27.821 slat (nsec): min=1962, max=110396, avg=2414.50, stdev=1452.31 00:17:27.821 clat (usec): min=2408, max=14069, avg=7916.74, stdev=664.99 00:17:27.821 lat (usec): min=2430, max=14071, avg=7919.15, stdev=664.92 00:17:27.821 clat percentiles (usec): 00:17:27.821 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:17:27.821 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8094], 00:17:27.821 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:17:27.821 | 99.00th=[ 9241], 99.50th=[ 9634], 99.90th=[12518], 99.95th=[13304], 00:17:27.821 | 99.99th=[14091] 00:17:27.821 bw ( KiB/s): min=34624, max=36136, per=99.88%, avg=35524.00, stdev=642.94, samples=4 00:17:27.821 iops : min= 8656, max= 9034, avg=8881.00, stdev=160.74, samples=4 00:17:27.821 write: IOPS=8905, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2006msec); 0 zone resets 00:17:27.821 slat (nsec): min=2062, max=96345, avg=2580.68, stdev=1261.42 00:17:27.821 clat (usec): min=1076, max=12532, avg=6429.65, stdev=541.32 00:17:27.821 lat (usec): min=1083, max=12534, avg=6432.23, stdev=541.28 00:17:27.821 clat percentiles (usec): 00:17:27.821 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:17:27.821 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:17:27.821 | 70.00th=[ 6718], 80.00th=[ 6849], 
90.00th=[ 7046], 95.00th=[ 7177], 00:17:27.821 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[10159], 99.95th=[10945], 00:17:27.821 | 99.99th=[12387] 00:17:27.821 bw ( KiB/s): min=35456, max=35904, per=100.00%, avg=35622.00, stdev=206.49, samples=4 00:17:27.821 iops : min= 8864, max= 8976, avg=8905.50, stdev=51.62, samples=4 00:17:27.821 lat (msec) : 2=0.03%, 4=0.13%, 10=99.66%, 20=0.18% 00:17:27.821 cpu : usr=62.09%, sys=33.97%, ctx=86, majf=0, minf=5 00:17:27.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:27.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.821 issued rwts: total=17836,17864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.821 00:17:27.821 Run status group 0 (all jobs): 00:17:27.821 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2006-2006msec 00:17:27.821 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2006-2006msec 00:17:27.821 17:04:43 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:27.821 17:04:43 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:27.821 17:04:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:27.821 17:04:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:27.821 17:04:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:27.821 17:04:43 -- common/autotest_common.sh@1326 -- 
# local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:27.821 17:04:43 -- common/autotest_common.sh@1327 -- # shift 00:17:27.821 17:04:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:27.821 17:04:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:27.821 17:04:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:27.821 17:04:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:27.821 17:04:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:27.821 17:04:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:27.821 17:04:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:27.821 17:04:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:27.821 17:04:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:27.821 17:04:43 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:27.821 17:04:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:27.821 17:04:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:27.821 17:04:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:27.821 17:04:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:27.821 17:04:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:27.821 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:27.821 fio-3.35 00:17:27.821 Starting 1 thread 00:17:27.821 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.351 00:17:30.351 test: (groupid=0, jobs=1): err= 0: pid=1727349: Thu Apr 
18 17:04:45 2024 00:17:30.351 read: IOPS=8321, BW=130MiB/s (136MB/s)(261MiB/2007msec) 00:17:30.351 slat (nsec): min=2799, max=91073, avg=3590.93, stdev=1527.11 00:17:30.351 clat (usec): min=2480, max=17020, avg=8856.77, stdev=1965.95 00:17:30.351 lat (usec): min=2483, max=17024, avg=8860.36, stdev=1965.99 00:17:30.351 clat percentiles (usec): 00:17:30.351 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7177], 00:17:30.351 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9241], 00:17:30.351 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[12125], 00:17:30.351 | 99.00th=[14353], 99.50th=[14877], 99.90th=[15664], 99.95th=[15795], 00:17:30.351 | 99.99th=[16909] 00:17:30.351 bw ( KiB/s): min=59136, max=77728, per=51.77%, avg=68928.00, stdev=10141.63, samples=4 00:17:30.351 iops : min= 3696, max= 4858, avg=4308.00, stdev=633.85, samples=4 00:17:30.351 write: IOPS=4939, BW=77.2MiB/s (80.9MB/s)(141MiB/1831msec); 0 zone resets 00:17:30.351 slat (usec): min=30, max=132, avg=33.28, stdev= 4.80 00:17:30.351 clat (usec): min=4484, max=19433, avg=11348.42, stdev=2002.39 00:17:30.351 lat (usec): min=4522, max=19464, avg=11381.70, stdev=2002.71 00:17:30.351 clat percentiles (usec): 00:17:30.351 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9765], 00:17:30.351 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:17:30.351 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14091], 95.00th=[15008], 00:17:30.351 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 00:17:30.351 | 99.99th=[19530] 00:17:30.351 bw ( KiB/s): min=63456, max=80032, per=90.79%, avg=71752.00, stdev=9396.29, samples=4 00:17:30.351 iops : min= 3966, max= 5002, avg=4484.50, stdev=587.27, samples=4 00:17:30.351 lat (msec) : 4=0.12%, 10=56.75%, 20=43.13% 00:17:30.351 cpu : usr=75.12%, sys=22.58%, ctx=34, majf=0, minf=1 00:17:30.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:30.351 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.351 issued rwts: total=16702,9044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.351 00:17:30.351 Run status group 0 (all jobs): 00:17:30.351 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=261MiB (274MB), run=2007-2007msec 00:17:30.351 WRITE: bw=77.2MiB/s (80.9MB/s), 77.2MiB/s-77.2MiB/s (80.9MB/s-80.9MB/s), io=141MiB (148MB), run=1831-1831msec 00:17:30.351 17:04:45 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.351 17:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.351 17:04:45 -- common/autotest_common.sh@10 -- # set +x 00:17:30.351 17:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.351 17:04:45 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:17:30.351 17:04:45 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:17:30.351 17:04:45 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:17:30.351 17:04:45 -- host/fio.sh@84 -- # nvmftestfini 00:17:30.351 17:04:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:30.351 17:04:45 -- nvmf/common.sh@117 -- # sync 00:17:30.351 17:04:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.351 17:04:45 -- nvmf/common.sh@120 -- # set +e 00:17:30.351 17:04:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.351 17:04:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.351 rmmod nvme_tcp 00:17:30.351 rmmod nvme_fabrics 00:17:30.351 rmmod nvme_keyring 00:17:30.351 17:04:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.351 17:04:45 -- nvmf/common.sh@124 -- # set -e 00:17:30.351 17:04:45 -- nvmf/common.sh@125 -- # return 0 00:17:30.351 17:04:45 -- nvmf/common.sh@478 -- # '[' -n 1726793 ']' 00:17:30.351 17:04:45 -- nvmf/common.sh@479 -- # killprocess 1726793 00:17:30.351 
17:04:45 -- common/autotest_common.sh@936 -- # '[' -z 1726793 ']' 00:17:30.351 17:04:45 -- common/autotest_common.sh@940 -- # kill -0 1726793 00:17:30.351 17:04:45 -- common/autotest_common.sh@941 -- # uname 00:17:30.351 17:04:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.351 17:04:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1726793 00:17:30.351 17:04:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:30.351 17:04:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:30.351 17:04:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1726793' 00:17:30.351 killing process with pid 1726793 00:17:30.351 17:04:45 -- common/autotest_common.sh@955 -- # kill 1726793 00:17:30.351 17:04:45 -- common/autotest_common.sh@960 -- # wait 1726793 00:17:30.611 17:04:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:30.611 17:04:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:30.611 17:04:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:30.611 17:04:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.611 17:04:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.611 17:04:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.611 17:04:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.611 17:04:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.516 17:04:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.516 00:17:32.516 real 0m10.311s 00:17:32.516 user 0m26.733s 00:17:32.516 sys 0m3.721s 00:17:32.516 17:04:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:32.516 17:04:48 -- common/autotest_common.sh@10 -- # set +x 00:17:32.516 ************************************ 00:17:32.516 END TEST nvmf_fio_host 00:17:32.516 ************************************ 00:17:32.516 17:04:48 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:32.516 17:04:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:32.516 17:04:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.516 17:04:48 -- common/autotest_common.sh@10 -- # set +x 00:17:32.774 ************************************ 00:17:32.774 START TEST nvmf_failover 00:17:32.774 ************************************ 00:17:32.774 17:04:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:32.774 * Looking for test storage... 00:17:32.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:32.774 17:04:48 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.774 17:04:48 -- nvmf/common.sh@7 -- # uname -s 00:17:32.774 17:04:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.774 17:04:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.774 17:04:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.774 17:04:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.774 17:04:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.774 17:04:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.774 17:04:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.774 17:04:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.774 17:04:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.774 17:04:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.774 17:04:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.774 17:04:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.774 17:04:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.774 17:04:48 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.774 17:04:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.774 17:04:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.774 17:04:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.774 17:04:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.774 17:04:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.774 17:04:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.774 17:04:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.774 17:04:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.774 17:04:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.774 17:04:48 -- paths/export.sh@5 -- # export PATH 00:17:32.774 17:04:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.774 17:04:48 -- nvmf/common.sh@47 -- # : 0 00:17:32.774 17:04:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.774 17:04:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.774 17:04:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.774 17:04:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.774 17:04:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.774 17:04:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.774 17:04:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.774 17:04:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.774 17:04:48 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:32.774 17:04:48 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:32.774 17:04:48 -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.774 17:04:48 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.774 17:04:48 -- host/failover.sh@18 -- # nvmftestinit 00:17:32.774 17:04:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:32.774 17:04:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.774 17:04:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:32.774 17:04:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:32.774 17:04:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:32.774 17:04:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.774 17:04:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.774 17:04:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.774 17:04:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:32.774 17:04:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:32.774 17:04:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.774 17:04:48 -- common/autotest_common.sh@10 -- # set +x 00:17:35.305 17:04:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:35.305 17:04:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.305 17:04:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.305 17:04:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.305 17:04:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.305 17:04:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.305 17:04:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.305 17:04:50 -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.305 17:04:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.305 17:04:50 -- nvmf/common.sh@296 -- # e810=() 00:17:35.305 17:04:50 -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.305 17:04:50 -- nvmf/common.sh@297 -- # x722=() 00:17:35.305 17:04:50 -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.305 17:04:50 -- 
nvmf/common.sh@298 -- # mlx=() 00:17:35.305 17:04:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.305 17:04:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.305 17:04:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.305 17:04:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.305 17:04:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.305 17:04:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.305 17:04:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:35.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:35.305 17:04:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.305 17:04:50 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.305 17:04:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:35.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:35.305 17:04:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.305 17:04:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.305 17:04:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.305 17:04:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.305 17:04:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.305 17:04:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:35.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:35.305 17:04:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.305 17:04:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.305 17:04:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.305 17:04:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.305 17:04:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.305 17:04:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:35.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:35.305 17:04:50 -- nvmf/common.sh@390 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:35.305 17:04:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:35.305 17:04:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:35.305 17:04:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:35.305 17:04:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.305 17:04:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.305 17:04:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.305 17:04:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.305 17:04:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.305 17:04:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.305 17:04:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.305 17:04:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.305 17:04:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.305 17:04:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.305 17:04:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.305 17:04:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.305 17:04:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.305 17:04:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.305 17:04:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.305 17:04:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.305 17:04:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.305 17:04:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.305 17:04:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.305 
17:04:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:17:35.305 00:17:35.305 --- 10.0.0.2 ping statistics --- 00:17:35.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.305 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:17:35.305 17:04:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:17:35.305 00:17:35.305 --- 10.0.0.1 ping statistics --- 00:17:35.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.305 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:35.305 17:04:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.305 17:04:50 -- nvmf/common.sh@411 -- # return 0 00:17:35.305 17:04:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.305 17:04:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.305 17:04:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.305 17:04:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.305 17:04:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.305 17:04:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.305 17:04:50 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:35.305 17:04:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.305 17:04:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.305 17:04:50 -- common/autotest_common.sh@10 -- # set +x 00:17:35.305 17:04:50 -- nvmf/common.sh@470 -- # nvmfpid=1729666 00:17:35.305 17:04:50 -- nvmf/common.sh@471 -- # waitforlisten 1729666 00:17:35.305 17:04:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:35.305 17:04:50 -- common/autotest_common.sh@817 -- # '[' -z 1729666 ']' 00:17:35.305 17:04:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.305 17:04:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.305 17:04:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.305 17:04:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.305 17:04:50 -- common/autotest_common.sh@10 -- # set +x 00:17:35.305 [2024-04-18 17:04:50.597989] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:35.306 [2024-04-18 17:04:50.598060] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.306 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.306 [2024-04-18 17:04:50.661150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.306 [2024-04-18 17:04:50.764945] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.306 [2024-04-18 17:04:50.765004] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.306 [2024-04-18 17:04:50.765028] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.306 [2024-04-18 17:04:50.765039] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.306 [2024-04-18 17:04:50.765049] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:35.306 [2024-04-18 17:04:50.765179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.306 [2024-04-18 17:04:50.765238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.306 [2024-04-18 17:04:50.765241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.306 17:04:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:35.306 17:04:50 -- common/autotest_common.sh@850 -- # return 0 00:17:35.306 17:04:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:35.306 17:04:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:35.306 17:04:50 -- common/autotest_common.sh@10 -- # set +x 00:17:35.306 17:04:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.306 17:04:50 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:35.562 [2024-04-18 17:04:51.101554] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.562 17:04:51 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:35.819 Malloc0 00:17:35.819 17:04:51 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.077 17:04:51 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.335 17:04:51 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.593 [2024-04-18 17:04:52.094333] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.593 17:04:52 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:36.851 [2024-04-18 17:04:52.331141] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:36.851 17:04:52 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:37.108 [2024-04-18 17:04:52.567927] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:37.108 17:04:52 -- host/failover.sh@31 -- # bdevperf_pid=1729838 00:17:37.108 17:04:52 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:37.108 17:04:52 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.108 17:04:52 -- host/failover.sh@34 -- # waitforlisten 1729838 /var/tmp/bdevperf.sock 00:17:37.108 17:04:52 -- common/autotest_common.sh@817 -- # '[' -z 1729838 ']' 00:17:37.108 17:04:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.108 17:04:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.108 17:04:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:37.108 17:04:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.109 17:04:52 -- common/autotest_common.sh@10 -- # set +x 00:17:37.367 17:04:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:37.367 17:04:52 -- common/autotest_common.sh@850 -- # return 0 00:17:37.367 17:04:52 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:37.644 NVMe0n1 00:17:37.644 17:04:53 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:37.913 00:17:38.172 17:04:53 -- host/failover.sh@39 -- # run_test_pid=1729970 00:17:38.172 17:04:53 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:38.172 17:04:53 -- host/failover.sh@41 -- # sleep 1 00:17:39.107 17:04:54 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.366 [2024-04-18 17:04:54.893269] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.366 [2024-04-18 17:04:54.893350] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.366 [2024-04-18 17:04:54.893368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.366 [2024-04-18 17:04:54.893389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.366 [2024-04-18 
17:04:54.893404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set
is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.893905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.893918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.893930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.893942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.893954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.893969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.893981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.893993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894005] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894029] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 
00:17:39.367 [2024-04-18 17:04:54.894053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894116] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894209] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 [2024-04-18 17:04:54.894268] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2370 is same with the state(5) to be set 00:17:39.367 17:04:54 -- host/failover.sh@45 -- # sleep 3 00:17:42.657 17:04:57 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:42.657 00:17:42.657 17:04:58 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:42.916 [2024-04-18 17:04:58.577347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 [2024-04-18 17:04:58.577441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 [2024-04-18 17:04:58.577458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 [2024-04-18 17:04:58.577473] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 [2024-04-18 17:04:58.577485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 [2024-04-18 17:04:58.577499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 [2024-04-18 17:04:58.577512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 [2024-04-18 17:04:58.577525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 [2024-04-18 17:04:58.577539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2b80 is same with the state(5) to be set 00:17:42.916 17:04:58 -- host/failover.sh@50 -- # sleep 3 00:17:46.204 17:05:01 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.204 [2024-04-18 17:05:01.854982] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.204 17:05:01 -- host/failover.sh@55 -- # sleep 1 00:17:47.600 17:05:02 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:47.600 [2024-04-18 17:05:03.144337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579190 is same with the state(5) to be set 00:17:47.600 [2024-04-18 17:05:03.144418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579190 is same with the state(5) to be set 00:17:47.600 [2024-04-18 17:05:03.144435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x579190 is same with the state(5) to be set 00:17:47.600 [2024-04-18 17:05:03.144449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579190 is same with the state(5) to be set 00:17:47.600 [2024-04-18 17:05:03.144463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579190 is same with the state(5) to be set 00:17:47.600 [2024-04-18 17:05:03.144476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579190 is same with the state(5) to be set 00:17:47.600 [2024-04-18 17:05:03.144489] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579190 is same with the state(5) to be set 00:17:47.600 [2024-04-18 17:05:03.144502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x579190 is same with the state(5) to be set 00:17:47.600 17:05:03 -- host/failover.sh@59 -- # wait 1729970 00:17:54.173 0 00:17:54.173 17:05:08 -- host/failover.sh@61 -- # killprocess 1729838 00:17:54.173 17:05:08 -- common/autotest_common.sh@936 -- # '[' -z 1729838 ']' 00:17:54.173 17:05:08 -- common/autotest_common.sh@940 -- # kill -0 1729838 00:17:54.173 17:05:08 -- common/autotest_common.sh@941 -- # uname 00:17:54.173 17:05:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:54.173 17:05:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1729838 00:17:54.173 17:05:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:54.173 17:05:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:54.173 17:05:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1729838' 00:17:54.173 killing process with pid 1729838 00:17:54.173 17:05:08 -- common/autotest_common.sh@955 -- # kill 1729838 00:17:54.173 17:05:08 -- common/autotest_common.sh@960 -- # wait 1729838 00:17:54.173 17:05:09 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:54.173 [2024-04-18 17:04:52.629219] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:54.173 [2024-04-18 17:04:52.629311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729838 ] 00:17:54.173 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.173 [2024-04-18 17:04:52.687801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.173 [2024-04-18 17:04:52.794785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.173 Running I/O for 15 seconds... 00:17:54.173 [2024-04-18 17:04:54.895044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895332] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.895973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.895986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.896013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.896039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.173 [2024-04-18 17:04:54.896067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 
[2024-04-18 17:04:54.896210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896358] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.173 [2024-04-18 17:04:54.896511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.173 [2024-04-18 17:04:54.896527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80904 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.896980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.896995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 
17:04:54.897050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897201] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.174 [2024-04-18 17:04:54.897755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.174 [2024-04-18 17:04:54.897782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.174 [2024-04-18 17:04:54.897809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.174 [2024-04-18 17:04:54.897837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.174 [2024-04-18 17:04:54.897863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.174 
[2024-04-18 17:04:54.897891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.174 [2024-04-18 17:04:54.897918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.897973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.897987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898375] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.174 [2024-04-18 17:04:54.898462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.174 [2024-04-18 17:04:54.898475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 
nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 
[2024-04-18 17:04:54.898732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:54.898833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.898863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:54.175 [2024-04-18 17:04:54.898878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:54.175 [2024-04-18 17:04:54.898889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81384 len:8 PRP1 0x0 PRP2 0x0 00:17:54.175 [2024-04-18 17:04:54.898909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 
17:04:54.898969] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c7f10 was disconnected and freed. reset controller. 00:17:54.175 [2024-04-18 17:04:54.898989] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:54.175 [2024-04-18 17:04:54.899020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.175 [2024-04-18 17:04:54.899054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.899069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.175 [2024-04-18 17:04:54.899082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.899096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.175 [2024-04-18 17:04:54.899108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.899122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.175 [2024-04-18 17:04:54.899134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:54.899148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:54.175 [2024-04-18 17:04:54.902412] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:54.175 [2024-04-18 17:04:54.902450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a93e0 (9): Bad file descriptor 00:17:54.175 [2024-04-18 17:04:55.065668] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:54.175 [2024-04-18 17:04:58.577654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.577705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.577744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.577759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.577789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.577804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.577820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.577833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.577848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111064 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.577862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.577877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.577905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.577920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.577934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.577948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.577975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.577991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.578003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.578017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.578030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.578044] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.578057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.578070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.578083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.578097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.578110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.578124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.578136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.578150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.578167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.175 [2024-04-18 17:04:58.578182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.175 [2024-04-18 17:04:58.578195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:54.175 [2024-04-18 17:04:58.578209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:54.175 [2024-04-18 17:04:58.578223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... over one hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: queued WRITE (sqid:1, lba 111168-111288) and READ (sqid:1, lba 110272-111016) commands, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 17:04:58.578237-17:04:58.581509 ...]
00:17:54.177 [2024-04-18 17:04:58.581529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b58d0 is same with the state(5) to be set
00:17:54.177 [2024-04-18 17:04:58.581546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:54.177 [2024-04-18 17:04:58.581565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:54.177 [2024-04-18 17:04:58.581578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111024 len:8 PRP1 0x0 PRP2 0x0
00:17:54.177 [2024-04-18 17:04:58.581590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-18 17:04:58.581650] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22b58d0 was disconnected and freed. reset controller. 00:17:54.177 [2024-04-18 17:04:58.581670] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:54.177 [2024-04-18 17:04:58.581717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.177 [2024-04-18 17:04:58.581736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:04:58.581749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.177 [2024-04-18 17:04:58.581777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:04:58.581791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.177 [2024-04-18 17:04:58.581804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:04:58.581817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.177 [2024-04-18 17:04:58.581829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:04:58.581842] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:54.177 [2024-04-18 17:04:58.585154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:54.177 [2024-04-18 17:04:58.585191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a93e0 (9): Bad file descriptor 00:17:54.177 [2024-04-18 17:04:58.734123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:54.177 [2024-04-18 17:05:03.144650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.144728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.144777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.144806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.144835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.144873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.144918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.144945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.144973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.144986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145028] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:54.177 [2024-04-18 17:05:03.145351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.177 [2024-04-18 17:05:03.145556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.177 [2024-04-18 17:05:03.145863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.177 [2024-04-18 17:05:03.145876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.145890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 
[2024-04-18 17:05:03.145903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.145918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.145931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.145946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.145960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.145974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.145987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146061] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:54.178 [2024-04-18 17:05:03.146352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 [2024-04-18 17:05:03.146749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.178 
[2024-04-18 17:05:03.146777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.178 [2024-04-18 17:05:03.146790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~115 further in-flight commands (READ lba:64920 through lba:65376, plus one WRITE lba:65704), each aborted with the same ABORTED - SQ DELETION (00/08) completion, elided for readability ...]
00:17:54.179 [2024-04-18 17:05:03.148595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b58d0 is same with the state(5) to be set 00:17:54.179 [2024-04-18 17:05:03.148611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:54.179 [2024-04-18 17:05:03.148622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:54.179 [2024-04-18 17:05:03.148634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65384 len:8 PRP1 0x0 PRP2 0x0 00:17:54.179 [2024-04-18 17:05:03.148647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.179 [2024-04-18 17:05:03.148734] 
bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22b58d0 was disconnected and freed. reset controller. 00:17:54.179 [2024-04-18 17:05:03.148753] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:54.179 [2024-04-18 17:05:03.148801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.179 [2024-04-18 17:05:03.148821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.179 [2024-04-18 17:05:03.148851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.179 [2024-04-18 17:05:03.148878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.179 [2024-04-18 17:05:03.148897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.179 [2024-04-18 17:05:03.148911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.179 [2024-04-18 17:05:03.148925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.179 [2024-04-18 17:05:03.148938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.179 [2024-04-18 17:05:03.148951] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:54.179 [2024-04-18 17:05:03.152255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:54.179 [2024-04-18 17:05:03.152293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a93e0 (9): Bad file descriptor 00:17:54.179 [2024-04-18 17:05:03.221791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:54.179 00:17:54.179 Latency(us) 00:17:54.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.179 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:54.179 Verification LBA range: start 0x0 length 0x4000 00:17:54.179 NVMe0n1 : 15.01 8340.85 32.58 998.98 0.00 13676.52 561.30 17282.09 00:17:54.179 =================================================================================================================== 00:17:54.179 Total : 8340.85 32.58 998.98 0.00 13676.52 561.30 17282.09 00:17:54.179 Received shutdown signal, test time was about 15.000000 seconds 00:17:54.179 00:17:54.179 Latency(us) 00:17:54.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.179 =================================================================================================================== 00:17:54.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.179 17:05:09 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:54.179 17:05:09 -- host/failover.sh@65 -- # count=3 00:17:54.179 17:05:09 -- host/failover.sh@67 -- # (( count != 3 )) 00:17:54.179 17:05:09 -- host/failover.sh@73 -- # bdevperf_pid=1731817 00:17:54.179 17:05:09 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:54.179 17:05:09 -- host/failover.sh@75 -- # waitforlisten 1731817 /var/tmp/bdevperf.sock 00:17:54.179 17:05:09 -- common/autotest_common.sh@817 
-- # '[' -z 1731817 ']' 00:17:54.179 17:05:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.179 17:05:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.179 17:05:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.179 17:05:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.179 17:05:09 -- common/autotest_common.sh@10 -- # set +x 00:17:54.179 17:05:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:54.179 17:05:09 -- common/autotest_common.sh@850 -- # return 0 00:17:54.179 17:05:09 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:54.179 [2024-04-18 17:05:09.606994] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:54.179 17:05:09 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:54.448 [2024-04-18 17:05:09.875714] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:54.448 17:05:09 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:54.705 NVMe0n1 00:17:54.705 17:05:10 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.273 00:17:55.273 17:05:10 -- host/failover.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.554 00:17:55.554 17:05:11 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:55.554 17:05:11 -- host/failover.sh@82 -- # grep -q NVMe0 00:17:55.831 17:05:11 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:56.089 17:05:11 -- host/failover.sh@87 -- # sleep 3 00:17:59.374 17:05:14 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:59.374 17:05:14 -- host/failover.sh@88 -- # grep -q NVMe0 00:17:59.374 17:05:14 -- host/failover.sh@90 -- # run_test_pid=1732493 00:17:59.374 17:05:14 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.374 17:05:14 -- host/failover.sh@92 -- # wait 1732493 00:18:00.751 0 00:18:00.751 17:05:16 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:00.751 [2024-04-18 17:05:09.104567] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:18:00.751 [2024-04-18 17:05:09.104662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731817 ] 00:18:00.751 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.751 [2024-04-18 17:05:09.163450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.751 [2024-04-18 17:05:09.265805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.751 [2024-04-18 17:05:11.669064] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:00.751 [2024-04-18 17:05:11.669143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:00.751 [2024-04-18 17:05:11.669164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.751 [2024-04-18 17:05:11.669180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:00.751 [2024-04-18 17:05:11.669193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.751 [2024-04-18 17:05:11.669207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:00.751 [2024-04-18 17:05:11.669221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.751 [2024-04-18 17:05:11.669234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:00.751 [2024-04-18 17:05:11.669247] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.751 [2024-04-18 17:05:11.669277] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:00.751 [2024-04-18 17:05:11.669324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:00.751 [2024-04-18 17:05:11.669366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e3e0 (9): Bad file descriptor 00:18:00.751 [2024-04-18 17:05:11.771532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:00.751 Running I/O for 1 seconds... 00:18:00.751 00:18:00.751 Latency(us) 00:18:00.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.752 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:00.752 Verification LBA range: start 0x0 length 0x4000 00:18:00.752 NVMe0n1 : 1.01 8664.17 33.84 0.00 0.00 14711.46 3228.25 12621.75 00:18:00.752 =================================================================================================================== 00:18:00.752 Total : 8664.17 33.84 0.00 0.00 14711.46 3228.25 12621.75 00:18:00.752 17:05:16 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:00.752 17:05:16 -- host/failover.sh@95 -- # grep -q NVMe0 00:18:00.752 17:05:16 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:01.009 17:05:16 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:01.009 17:05:16 -- host/failover.sh@99 -- # grep -q NVMe0 00:18:01.267 17:05:16 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:01.526 17:05:17 -- host/failover.sh@101 -- # sleep 3 00:18:04.816 17:05:20 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:04.816 17:05:20 -- host/failover.sh@103 -- # grep -q NVMe0 00:18:04.816 17:05:20 -- host/failover.sh@108 -- # killprocess 1731817 00:18:04.816 17:05:20 -- common/autotest_common.sh@936 -- # '[' -z 1731817 ']' 00:18:04.816 17:05:20 -- common/autotest_common.sh@940 -- # kill -0 1731817 00:18:04.816 17:05:20 -- common/autotest_common.sh@941 -- # uname 00:18:04.816 17:05:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.816 17:05:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1731817 00:18:04.816 17:05:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:04.816 17:05:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:04.816 17:05:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1731817' 00:18:04.816 killing process with pid 1731817 00:18:04.816 17:05:20 -- common/autotest_common.sh@955 -- # kill 1731817 00:18:04.816 17:05:20 -- common/autotest_common.sh@960 -- # wait 1731817 00:18:05.075 17:05:20 -- host/failover.sh@110 -- # sync 00:18:05.075 17:05:20 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.335 17:05:20 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:05.335 17:05:20 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:05.335 17:05:20 -- host/failover.sh@116 -- # nvmftestfini 00:18:05.335 17:05:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:05.335 17:05:20 -- 
nvmf/common.sh@117 -- # sync 00:18:05.335 17:05:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:05.335 17:05:20 -- nvmf/common.sh@120 -- # set +e 00:18:05.335 17:05:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:05.335 17:05:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:05.335 rmmod nvme_tcp 00:18:05.335 rmmod nvme_fabrics 00:18:05.335 rmmod nvme_keyring 00:18:05.335 17:05:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:05.335 17:05:20 -- nvmf/common.sh@124 -- # set -e 00:18:05.335 17:05:20 -- nvmf/common.sh@125 -- # return 0 00:18:05.335 17:05:20 -- nvmf/common.sh@478 -- # '[' -n 1729666 ']' 00:18:05.335 17:05:20 -- nvmf/common.sh@479 -- # killprocess 1729666 00:18:05.335 17:05:20 -- common/autotest_common.sh@936 -- # '[' -z 1729666 ']' 00:18:05.335 17:05:20 -- common/autotest_common.sh@940 -- # kill -0 1729666 00:18:05.335 17:05:20 -- common/autotest_common.sh@941 -- # uname 00:18:05.335 17:05:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.335 17:05:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1729666 00:18:05.335 17:05:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:05.335 17:05:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:05.335 17:05:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1729666' 00:18:05.335 killing process with pid 1729666 00:18:05.335 17:05:20 -- common/autotest_common.sh@955 -- # kill 1729666 00:18:05.335 17:05:20 -- common/autotest_common.sh@960 -- # wait 1729666 00:18:05.595 17:05:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:05.595 17:05:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:05.595 17:05:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:05.856 17:05:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.856 17:05:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.856 17:05:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:05.856 17:05:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.856 17:05:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.764 17:05:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:07.764 00:18:07.764 real 0m35.062s 00:18:07.764 user 2m3.508s 00:18:07.764 sys 0m5.723s 00:18:07.764 17:05:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.764 17:05:23 -- common/autotest_common.sh@10 -- # set +x 00:18:07.764 ************************************ 00:18:07.764 END TEST nvmf_failover 00:18:07.764 ************************************ 00:18:07.764 17:05:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:07.764 17:05:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:07.764 17:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.764 17:05:23 -- common/autotest_common.sh@10 -- # set +x 00:18:07.764 ************************************ 00:18:07.764 START TEST nvmf_discovery 00:18:07.764 ************************************ 00:18:07.764 17:05:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:08.022 * Looking for test storage... 
00:18:08.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:08.022 17:05:23 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.022 17:05:23 -- nvmf/common.sh@7 -- # uname -s 00:18:08.022 17:05:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.022 17:05:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.022 17:05:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.022 17:05:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.022 17:05:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.022 17:05:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.022 17:05:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.022 17:05:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.022 17:05:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.022 17:05:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.022 17:05:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.022 17:05:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.023 17:05:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.023 17:05:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.023 17:05:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.023 17:05:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.023 17:05:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.023 17:05:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.023 17:05:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.023 17:05:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.023 17:05:23 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.023 17:05:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.023 17:05:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.023 17:05:23 -- paths/export.sh@5 -- # export PATH 00:18:08.023 17:05:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.023 17:05:23 -- nvmf/common.sh@47 -- # : 0 00:18:08.023 17:05:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.023 17:05:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.023 17:05:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.023 17:05:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.023 17:05:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.023 17:05:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.023 17:05:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.023 17:05:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.023 17:05:23 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:08.023 17:05:23 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:08.023 17:05:23 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:08.023 17:05:23 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:08.023 17:05:23 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:08.023 17:05:23 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:08.023 17:05:23 -- host/discovery.sh@25 -- # nvmftestinit 00:18:08.023 17:05:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:08.023 17:05:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.023 17:05:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:08.023 17:05:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:08.023 
17:05:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:08.023 17:05:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.023 17:05:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.023 17:05:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.023 17:05:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:08.023 17:05:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:08.023 17:05:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.023 17:05:23 -- common/autotest_common.sh@10 -- # set +x 00:18:09.926 17:05:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:09.926 17:05:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:09.926 17:05:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:09.926 17:05:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:09.926 17:05:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:09.926 17:05:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:09.926 17:05:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:09.926 17:05:25 -- nvmf/common.sh@295 -- # net_devs=() 00:18:09.926 17:05:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:09.926 17:05:25 -- nvmf/common.sh@296 -- # e810=() 00:18:09.926 17:05:25 -- nvmf/common.sh@296 -- # local -ga e810 00:18:09.926 17:05:25 -- nvmf/common.sh@297 -- # x722=() 00:18:09.926 17:05:25 -- nvmf/common.sh@297 -- # local -ga x722 00:18:09.926 17:05:25 -- nvmf/common.sh@298 -- # mlx=() 00:18:09.926 17:05:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:09.926 17:05:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.926 17:05:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:09.926 17:05:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:09.926 17:05:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:09.926 17:05:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.926 17:05:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:09.926 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:09.926 17:05:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.926 17:05:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:09.926 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:09.926 17:05:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.926 17:05:25 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:09.926 17:05:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.926 17:05:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.926 17:05:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:09.926 17:05:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.926 17:05:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:09.926 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:09.926 17:05:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.926 17:05:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.926 17:05:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.926 17:05:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:09.926 17:05:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.926 17:05:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:09.926 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:09.926 17:05:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.926 17:05:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:09.926 17:05:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:09.926 17:05:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:09.926 17:05:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:09.926 17:05:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.926 17:05:25 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.926 17:05:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.926 17:05:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:09.926 17:05:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.926 17:05:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.926 17:05:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:09.926 17:05:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.926 17:05:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.926 17:05:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:09.926 17:05:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:09.926 17:05:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.926 17:05:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.185 17:05:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.185 17:05:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.185 17:05:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:10.185 17:05:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.185 17:05:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.185 17:05:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.185 17:05:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:10.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:10.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:18:10.185 00:18:10.185 --- 10.0.0.2 ping statistics --- 00:18:10.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.185 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:18:10.185 17:05:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:18:10.185 00:18:10.185 --- 10.0.0.1 ping statistics --- 00:18:10.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.185 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:18:10.185 17:05:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.185 17:05:25 -- nvmf/common.sh@411 -- # return 0 00:18:10.185 17:05:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:10.185 17:05:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.185 17:05:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:10.185 17:05:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:10.185 17:05:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.185 17:05:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:10.185 17:05:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:10.185 17:05:25 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:10.185 17:05:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:10.185 17:05:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:10.185 17:05:25 -- common/autotest_common.sh@10 -- # set +x 00:18:10.185 17:05:25 -- nvmf/common.sh@470 -- # nvmfpid=1735218 00:18:10.185 17:05:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.185 17:05:25 -- nvmf/common.sh@471 -- # waitforlisten 1735218 00:18:10.185 17:05:25 -- common/autotest_common.sh@817 
-- # '[' -z 1735218 ']' 00:18:10.185 17:05:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.185 17:05:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:10.185 17:05:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.185 17:05:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:10.185 17:05:25 -- common/autotest_common.sh@10 -- # set +x 00:18:10.185 [2024-04-18 17:05:25.784767] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:10.185 [2024-04-18 17:05:25.784861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.185 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.185 [2024-04-18 17:05:25.853447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.445 [2024-04-18 17:05:25.966571] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.445 [2024-04-18 17:05:25.966639] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.445 [2024-04-18 17:05:25.966665] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.445 [2024-04-18 17:05:25.966678] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.445 [2024-04-18 17:05:25.966690] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.445 [2024-04-18 17:05:25.966728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.014 17:05:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:11.014 17:05:26 -- common/autotest_common.sh@850 -- # return 0 00:18:11.014 17:05:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:11.014 17:05:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:11.014 17:05:26 -- common/autotest_common.sh@10 -- # set +x 00:18:11.272 17:05:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.272 17:05:26 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.272 17:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.272 17:05:26 -- common/autotest_common.sh@10 -- # set +x 00:18:11.272 [2024-04-18 17:05:26.742212] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.272 17:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.272 17:05:26 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:11.272 17:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.272 17:05:26 -- common/autotest_common.sh@10 -- # set +x 00:18:11.272 [2024-04-18 17:05:26.750408] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:11.272 17:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.272 17:05:26 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:11.272 17:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.272 17:05:26 -- common/autotest_common.sh@10 -- # set +x 00:18:11.272 null0 00:18:11.272 17:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.272 17:05:26 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:11.272 17:05:26 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:11.272 17:05:26 -- common/autotest_common.sh@10 -- # set +x 00:18:11.272 null1 00:18:11.272 17:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.272 17:05:26 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:11.272 17:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.272 17:05:26 -- common/autotest_common.sh@10 -- # set +x 00:18:11.272 17:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.272 17:05:26 -- host/discovery.sh@45 -- # hostpid=1735370 00:18:11.272 17:05:26 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:11.272 17:05:26 -- host/discovery.sh@46 -- # waitforlisten 1735370 /tmp/host.sock 00:18:11.272 17:05:26 -- common/autotest_common.sh@817 -- # '[' -z 1735370 ']' 00:18:11.272 17:05:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:11.272 17:05:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:11.272 17:05:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:11.272 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:11.272 17:05:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:11.272 17:05:26 -- common/autotest_common.sh@10 -- # set +x 00:18:11.272 [2024-04-18 17:05:26.819349] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:18:11.272 [2024-04-18 17:05:26.819470] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735370 ] 00:18:11.272 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.272 [2024-04-18 17:05:26.877307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.530 [2024-04-18 17:05:26.983443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.530 17:05:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:11.530 17:05:27 -- common/autotest_common.sh@850 -- # return 0 00:18:11.530 17:05:27 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.530 17:05:27 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:11.530 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.530 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.530 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.530 17:05:27 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:11.530 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.530 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.530 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.530 17:05:27 -- host/discovery.sh@72 -- # notify_id=0 00:18:11.530 17:05:27 -- host/discovery.sh@83 -- # get_subsystem_names 00:18:11.530 17:05:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:11.530 17:05:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:11.530 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.530 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.530 
17:05:27 -- host/discovery.sh@59 -- # sort 00:18:11.530 17:05:27 -- host/discovery.sh@59 -- # xargs 00:18:11.530 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.530 17:05:27 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:11.530 17:05:27 -- host/discovery.sh@84 -- # get_bdev_list 00:18:11.530 17:05:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.530 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.530 17:05:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.530 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.530 17:05:27 -- host/discovery.sh@55 -- # sort 00:18:11.530 17:05:27 -- host/discovery.sh@55 -- # xargs 00:18:11.530 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.530 17:05:27 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:11.530 17:05:27 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:11.530 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.530 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.530 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.530 17:05:27 -- host/discovery.sh@87 -- # get_subsystem_names 00:18:11.530 17:05:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:11.530 17:05:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:11.530 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.530 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.530 17:05:27 -- host/discovery.sh@59 -- # sort 00:18:11.530 17:05:27 -- host/discovery.sh@59 -- # xargs 00:18:11.530 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.788 17:05:27 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:11.788 17:05:27 -- host/discovery.sh@88 -- # get_bdev_list 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.788 17:05:27 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.788 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # sort 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # xargs 00:18:11.788 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.788 17:05:27 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:11.788 17:05:27 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:11.788 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.788 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.788 17:05:27 -- host/discovery.sh@91 -- # get_subsystem_names 00:18:11.788 17:05:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:11.788 17:05:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:11.788 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.788 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 17:05:27 -- host/discovery.sh@59 -- # sort 00:18:11.788 17:05:27 -- host/discovery.sh@59 -- # xargs 00:18:11.788 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.788 17:05:27 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:11.788 17:05:27 -- host/discovery.sh@92 -- # get_bdev_list 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.788 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.788 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # sort 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # xargs 00:18:11.788 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.788 17:05:27 -- host/discovery.sh@92 -- 
# [[ '' == '' ]] 00:18:11.788 17:05:27 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:11.788 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.788 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 [2024-04-18 17:05:27.380129] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.788 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.788 17:05:27 -- host/discovery.sh@97 -- # get_subsystem_names 00:18:11.788 17:05:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:11.788 17:05:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:11.788 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.788 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 17:05:27 -- host/discovery.sh@59 -- # sort 00:18:11.788 17:05:27 -- host/discovery.sh@59 -- # xargs 00:18:11.788 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.788 17:05:27 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:11.788 17:05:27 -- host/discovery.sh@98 -- # get_bdev_list 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.788 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.788 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # sort 00:18:11.788 17:05:27 -- host/discovery.sh@55 -- # xargs 00:18:11.788 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.788 17:05:27 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:11.788 17:05:27 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:11.788 17:05:27 -- host/discovery.sh@79 -- # expected_count=0 00:18:11.788 17:05:27 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count 
&& ((notification_count == expected_count))' 00:18:11.788 17:05:27 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:11.788 17:05:27 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.788 17:05:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.788 17:05:27 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:11.788 17:05:27 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:11.788 17:05:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:11.788 17:05:27 -- host/discovery.sh@74 -- # jq '. | length' 00:18:11.788 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.788 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.047 17:05:27 -- host/discovery.sh@74 -- # notification_count=0 00:18:12.047 17:05:27 -- host/discovery.sh@75 -- # notify_id=0 00:18:12.047 17:05:27 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:12.047 17:05:27 -- common/autotest_common.sh@904 -- # return 0 00:18:12.048 17:05:27 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:12.048 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.048 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:12.048 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.048 17:05:27 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:12.048 17:05:27 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:12.048 17:05:27 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.048 17:05:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.048 17:05:27 -- 
common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:12.048 17:05:27 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:12.048 17:05:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:12.048 17:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.048 17:05:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:12.048 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:18:12.048 17:05:27 -- host/discovery.sh@59 -- # sort 00:18:12.048 17:05:27 -- host/discovery.sh@59 -- # xargs 00:18:12.048 17:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.048 17:05:27 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:18:12.048 17:05:27 -- common/autotest_common.sh@906 -- # sleep 1 00:18:12.615 [2024-04-18 17:05:28.120185] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:12.615 [2024-04-18 17:05:28.120217] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:12.615 [2024-04-18 17:05:28.120243] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:12.615 [2024-04-18 17:05:28.209563] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:12.872 [2024-04-18 17:05:28.393319] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:12.872 [2024-04-18 17:05:28.393342] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:12.872 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.872 17:05:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:12.872 17:05:28 -- common/autotest_common.sh@903 -- # 
get_subsystem_names 00:18:12.872 17:05:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:12.872 17:05:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:12.872 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.872 17:05:28 -- host/discovery.sh@59 -- # sort 00:18:12.872 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:12.872 17:05:28 -- host/discovery.sh@59 -- # xargs 00:18:12.872 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.154 17:05:28 -- common/autotest_common.sh@904 -- # return 0 00:18:13.154 17:05:28 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.154 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:13.154 17:05:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.154 17:05:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:13.154 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.154 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 17:05:28 -- host/discovery.sh@55 -- # sort 00:18:13.154 17:05:28 -- host/discovery.sh@55 -- # xargs 00:18:13.154 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:13.154 17:05:28 -- common/autotest_common.sh@904 -- # return 0 00:18:13.154 17:05:28 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 
00:18:13.154 17:05:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.154 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:13.154 17:05:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:13.154 17:05:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:13.154 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.154 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 17:05:28 -- host/discovery.sh@63 -- # sort -n 00:18:13.154 17:05:28 -- host/discovery.sh@63 -- # xargs 00:18:13.154 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:18:13.154 17:05:28 -- common/autotest_common.sh@904 -- # return 0 00:18:13.154 17:05:28 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:13.154 17:05:28 -- host/discovery.sh@79 -- # expected_count=1 00:18:13.154 17:05:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:13.154 17:05:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:13.154 17:05:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.154 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:13.154 17:05:28 -- host/discovery.sh@74 
-- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:13.154 17:05:28 -- host/discovery.sh@74 -- # jq '. | length' 00:18:13.154 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.154 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.154 17:05:28 -- host/discovery.sh@74 -- # notification_count=1 00:18:13.154 17:05:28 -- host/discovery.sh@75 -- # notify_id=1 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:13.154 17:05:28 -- common/autotest_common.sh@904 -- # return 0 00:18:13.154 17:05:28 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:13.154 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.154 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.154 17:05:28 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.154 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:13.154 17:05:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.154 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.154 17:05:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:13.154 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 17:05:28 -- host/discovery.sh@55 -- # sort 00:18:13.154 17:05:28 -- host/discovery.sh@55 -- # xargs 00:18:13.154 17:05:28 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:13.154 17:05:28 -- common/autotest_common.sh@904 -- # return 0 00:18:13.154 17:05:28 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:13.154 17:05:28 -- host/discovery.sh@79 -- # expected_count=1 00:18:13.154 17:05:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:13.154 17:05:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:13.154 17:05:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.154 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:13.154 17:05:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:13.154 17:05:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:13.154 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.154 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.154 17:05:28 -- host/discovery.sh@74 -- # notification_count=1 00:18:13.154 17:05:28 -- host/discovery.sh@75 -- # notify_id=2 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:13.154 17:05:28 -- common/autotest_common.sh@904 -- # return 0 00:18:13.154 17:05:28 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:13.154 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.154 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 [2024-04-18 17:05:28.832404] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:13.154 [2024-04-18 17:05:28.832761] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:13.154 [2024-04-18 17:05:28.832799] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:13.154 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.154 17:05:28 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.154 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:13.154 17:05:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:13.154 17:05:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:13.154 17:05:28 -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:18:13.154 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.154 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 17:05:28 -- host/discovery.sh@59 -- # sort 00:18:13.154 17:05:28 -- host/discovery.sh@59 -- # xargs 00:18:13.419 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.419 17:05:28 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.419 17:05:28 -- common/autotest_common.sh@904 -- # return 0 00:18:13.419 17:05:28 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:13.419 17:05:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:13.419 17:05:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.419 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.419 17:05:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:13.419 17:05:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:13.419 17:05:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.419 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.419 17:05:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:13.419 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.419 17:05:28 -- host/discovery.sh@55 -- # sort 00:18:13.419 17:05:28 -- host/discovery.sh@55 -- # xargs 00:18:13.419 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.420 17:05:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:13.420 17:05:28 -- common/autotest_common.sh@904 -- # return 0 00:18:13.420 17:05:28 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:13.420 17:05:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:13.420 17:05:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.420 17:05:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.420 17:05:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:13.420 17:05:28 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:13.420 17:05:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:13.420 17:05:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.420 17:05:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:13.420 17:05:28 -- common/autotest_common.sh@10 -- # set +x 00:18:13.420 17:05:28 -- host/discovery.sh@63 -- # sort -n 00:18:13.420 17:05:28 -- host/discovery.sh@63 -- # xargs 00:18:13.420 17:05:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.420 17:05:28 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:13.420 17:05:28 -- common/autotest_common.sh@906 -- # sleep 1 00:18:13.420 [2024-04-18 17:05:28.962459] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:13.420 [2024-04-18 17:05:29.064205] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:13.420 [2024-04-18 17:05:29.064226] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:13.420 [2024-04-18 17:05:29.064235] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:14.358 17:05:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.358 17:05:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:14.358 17:05:29 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:14.358 17:05:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:14.358 17:05:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:14.358 17:05:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.358 17:05:29 -- common/autotest_common.sh@10 -- # set +x 00:18:14.358 17:05:29 -- host/discovery.sh@63 -- # sort -n 00:18:14.358 17:05:29 -- host/discovery.sh@63 -- # xargs 00:18:14.358 17:05:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.358 17:05:30 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:14.358 17:05:30 -- common/autotest_common.sh@904 -- # return 0 00:18:14.358 17:05:30 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:14.358 17:05:30 -- host/discovery.sh@79 -- # expected_count=0 00:18:14.358 17:05:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:14.358 17:05:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:14.358 17:05:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:14.358 17:05:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.358 17:05:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:14.358 17:05:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:14.358 17:05:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:14.358 17:05:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:14.358 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.358 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.358 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.358 17:05:30 -- host/discovery.sh@74 -- # notification_count=0 00:18:14.358 17:05:30 -- host/discovery.sh@75 -- # notify_id=2 00:18:14.358 17:05:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:14.358 17:05:30 -- common/autotest_common.sh@904 -- # return 0 00:18:14.358 17:05:30 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:14.358 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.358 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.358 [2024-04-18 17:05:30.060310] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:14.358 [2024-04-18 17:05:30.060370] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:14.358 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.358 17:05:30 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:14.617 17:05:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:14.617 17:05:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:14.617 17:05:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.617 [2024-04-18 17:05:30.064882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.617 [2024-04-18 17:05:30.064937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.617 [2024-04-18 17:05:30.064955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.617 [2024-04-18 17:05:30.064969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.617 [2024-04-18 17:05:30.064988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.617 [2024-04-18 17:05:30.065001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.617 [2024-04-18 17:05:30.065016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.617 [2024-04-18 17:05:30.065030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.617 17:05:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:14.617 [2024-04-18 17:05:30.065044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efba20 is same with the state(5) to be set 00:18:14.617 17:05:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:14.617 17:05:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:14.617 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.617 17:05:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:14.617 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.617 17:05:30 -- host/discovery.sh@59 -- # sort 00:18:14.617 17:05:30 -- host/discovery.sh@59 -- # xargs 00:18:14.617 [2024-04-18 17:05:30.074889] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efba20 (9): Bad file descriptor 00:18:14.617 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.617 [2024-04-18 17:05:30.084928] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:14.617 [2024-04-18 17:05:30.085200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.085322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.085349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efba20 with addr=10.0.0.2, port=4420 00:18:14.617 [2024-04-18 17:05:30.085366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efba20 is same with the state(5) to be set 00:18:14.617 [2024-04-18 17:05:30.085398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efba20 (9): Bad file descriptor 00:18:14.617 [2024-04-18 17:05:30.085434] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:14.617 [2024-04-18 17:05:30.085453] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:14.617 [2024-04-18 17:05:30.085469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:14.617 [2024-04-18 17:05:30.085489] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
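The `waitforcondition` helper that the trace keeps invoking (the `local max=10` / `(( max-- ))` / `eval` / `sleep 1` lines from common/autotest_common.sh) can be reconstructed from the xtrace output as a small polling loop. This is a sketch inferred from the trace, not the verbatim SPDK source; in particular the `return 1` on timeout is an assumption, since the log only shows the success path (`return 0`).

```shell
# Sketch of the waitforcondition polling helper, reconstructed from the
# xtrace lines above (autotest_common.sh@900-@906). Retries an arbitrary
# shell condition up to 10 times, sleeping 1s between attempts.
waitforcondition() {
    local cond=$1   # condition string, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local max=10    # maximum number of attempts before giving up
    while (( max-- )); do
        if eval "$cond"; then
            return 0     # condition held: success
        fi
        sleep 1          # back off before the next attempt
    done
    return 1             # assumed timeout behavior; not visible in this log
}
```

In the log this is what turns an eventually-consistent check (e.g. waiting for the discovery poller to attach `nvme0` after a new listener is added) into a bounded wait instead of a flaky one-shot assertion.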
00:18:14.617 [2024-04-18 17:05:30.095004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:14.617 [2024-04-18 17:05:30.095221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.095434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.095461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efba20 with addr=10.0.0.2, port=4420 00:18:14.617 [2024-04-18 17:05:30.095483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efba20 is same with the state(5) to be set 00:18:14.617 [2024-04-18 17:05:30.095518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efba20 (9): Bad file descriptor 00:18:14.617 [2024-04-18 17:05:30.095552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:14.617 [2024-04-18 17:05:30.095570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:14.617 [2024-04-18 17:05:30.095584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:14.617 [2024-04-18 17:05:30.095603] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:14.617 [2024-04-18 17:05:30.105073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:14.617 [2024-04-18 17:05:30.105291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.105442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.105470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efba20 with addr=10.0.0.2, port=4420 00:18:14.617 [2024-04-18 17:05:30.105487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efba20 is same with the state(5) to be set 00:18:14.617 [2024-04-18 17:05:30.105510] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efba20 (9): Bad file descriptor 00:18:14.617 [2024-04-18 17:05:30.105531] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:14.617 [2024-04-18 17:05:30.105545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:14.617 [2024-04-18 17:05:30.105558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:14.617 [2024-04-18 17:05:30.105577] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:14.617 17:05:30 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.617 17:05:30 -- common/autotest_common.sh@904 -- # return 0 00:18:14.617 17:05:30 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:14.617 17:05:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:14.617 17:05:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:14.617 17:05:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.617 17:05:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:14.617 17:05:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:14.617 17:05:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.617 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.617 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.617 17:05:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:14.617 17:05:30 -- host/discovery.sh@55 -- # sort 00:18:14.617 17:05:30 -- host/discovery.sh@55 -- # xargs 00:18:14.617 [2024-04-18 17:05:30.115887] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:14.617 [2024-04-18 17:05:30.116099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.116254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.116281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efba20 with addr=10.0.0.2, port=4420 00:18:14.617 [2024-04-18 17:05:30.116297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efba20 is same with the state(5) to be set 00:18:14.617 [2024-04-18 17:05:30.116320] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efba20 (9): Bad 
file descriptor 00:18:14.617 [2024-04-18 17:05:30.116340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:14.617 [2024-04-18 17:05:30.116360] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:14.617 [2024-04-18 17:05:30.116375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:14.617 [2024-04-18 17:05:30.116405] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:14.617 [2024-04-18 17:05:30.125960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:14.617 [2024-04-18 17:05:30.126132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.126284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.126311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efba20 with addr=10.0.0.2, port=4420 00:18:14.617 [2024-04-18 17:05:30.126328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efba20 is same with the state(5) to be set 00:18:14.617 [2024-04-18 17:05:30.126349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efba20 (9): Bad file descriptor 00:18:14.617 [2024-04-18 17:05:30.126370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:14.617 [2024-04-18 17:05:30.126394] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:14.617 [2024-04-18 17:05:30.126410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:18:14.617 [2024-04-18 17:05:30.126434] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:14.617 [2024-04-18 17:05:30.136031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:14.617 [2024-04-18 17:05:30.136210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.136338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.136365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efba20 with addr=10.0.0.2, port=4420 00:18:14.617 [2024-04-18 17:05:30.136389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efba20 is same with the state(5) to be set 00:18:14.617 [2024-04-18 17:05:30.136413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efba20 (9): Bad file descriptor 00:18:14.617 [2024-04-18 17:05:30.136439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:14.617 [2024-04-18 17:05:30.136453] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:14.617 [2024-04-18 17:05:30.136466] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:14.617 [2024-04-18 17:05:30.136485] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:14.617 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.617 [2024-04-18 17:05:30.146100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:14.617 [2024-04-18 17:05:30.146320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.146495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.617 [2024-04-18 17:05:30.146523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efba20 with addr=10.0.0.2, port=4420 00:18:14.618 [2024-04-18 17:05:30.146539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efba20 is same with the state(5) to be set 00:18:14.618 [2024-04-18 17:05:30.146561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efba20 (9): Bad file descriptor 00:18:14.618 [2024-04-18 17:05:30.146582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:14.618 [2024-04-18 17:05:30.146596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:14.618 [2024-04-18 17:05:30.146615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:14.618 [2024-04-18 17:05:30.146635] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:14.618 [2024-04-18 17:05:30.146988] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:14.618 [2024-04-18 17:05:30.147016] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:14.618 17:05:30 -- common/autotest_common.sh@904 -- # return 0 00:18:14.618 17:05:30 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:14.618 17:05:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:14.618 17:05:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:14.618 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.618 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.618 17:05:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:14.618 17:05:30 -- host/discovery.sh@63 -- # sort -n 00:18:14.618 17:05:30 -- host/discovery.sh@63 -- # xargs 00:18:14.618 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:18:14.618 17:05:30 -- common/autotest_common.sh@904 -- # return 0 00:18:14.618 17:05:30 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:14.618 17:05:30 -- host/discovery.sh@79 -- # expected_count=0 
00:18:14.618 17:05:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:14.618 17:05:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:14.618 17:05:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:14.618 17:05:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:14.618 17:05:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:14.618 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.618 17:05:30 -- host/discovery.sh@74 -- # jq '. | length' 00:18:14.618 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.618 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.618 17:05:30 -- host/discovery.sh@74 -- # notification_count=0 00:18:14.618 17:05:30 -- host/discovery.sh@75 -- # notify_id=2 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:14.618 17:05:30 -- common/autotest_common.sh@904 -- # return 0 00:18:14.618 17:05:30 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:14.618 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.618 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.618 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.618 17:05:30 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:14.618 17:05:30 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:14.618 17:05:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:14.618 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.618 17:05:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:14.618 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.618 17:05:30 -- host/discovery.sh@59 -- # sort 00:18:14.618 17:05:30 -- host/discovery.sh@59 -- # xargs 00:18:14.618 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:14.618 17:05:30 -- common/autotest_common.sh@904 -- # return 0 00:18:14.618 17:05:30 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:14.618 17:05:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:14.618 17:05:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.618 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.618 17:05:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:14.618 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.618 17:05:30 -- host/discovery.sh@55 -- # sort 00:18:14.618 17:05:30 -- host/discovery.sh@55 -- # xargs 00:18:14.618 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:14.618 17:05:30 -- 
common/autotest_common.sh@904 -- # return 0 00:18:14.618 17:05:30 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:14.618 17:05:30 -- host/discovery.sh@79 -- # expected_count=2 00:18:14.618 17:05:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:14.618 17:05:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:14.618 17:05:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:14.618 17:05:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:14.618 17:05:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:14.875 17:05:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:14.875 17:05:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:14.875 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.875 17:05:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:14.875 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:14.875 17:05:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.875 17:05:30 -- host/discovery.sh@74 -- # notification_count=2 00:18:14.875 17:05:30 -- host/discovery.sh@75 -- # notify_id=4 00:18:14.875 17:05:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:14.875 17:05:30 -- common/autotest_common.sh@904 -- # return 0 00:18:14.875 17:05:30 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.875 17:05:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.875 17:05:30 -- common/autotest_common.sh@10 -- # set +x 00:18:15.812 [2024-04-18 17:05:31.408532] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:15.812 [2024-04-18 17:05:31.408564] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:15.812 [2024-04-18 17:05:31.408586] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:15.812 [2024-04-18 17:05:31.494841] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:16.377 [2024-04-18 17:05:31.804836] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:16.378 [2024-04-18 17:05:31.804868] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:16.378 17:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.378 17:05:31 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:16.378 17:05:31 -- 
common/autotest_common.sh@638 -- # local es=0 00:18:16.378 17:05:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:16.378 17:05:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.378 17:05:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:16.378 17:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.378 17:05:31 -- common/autotest_common.sh@10 -- # set +x 00:18:16.378 request: 00:18:16.378 { 00:18:16.378 "name": "nvme", 00:18:16.378 "trtype": "tcp", 00:18:16.378 "traddr": "10.0.0.2", 00:18:16.378 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:16.378 "adrfam": "ipv4", 00:18:16.378 "trsvcid": "8009", 00:18:16.378 "wait_for_attach": true, 00:18:16.378 "method": "bdev_nvme_start_discovery", 00:18:16.378 "req_id": 1 00:18:16.378 } 00:18:16.378 Got JSON-RPC error response 00:18:16.378 response: 00:18:16.378 { 00:18:16.378 "code": -17, 00:18:16.378 "message": "File exists" 00:18:16.378 } 00:18:16.378 17:05:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:16.378 17:05:31 -- common/autotest_common.sh@641 -- # es=1 00:18:16.378 17:05:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:16.378 17:05:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:16.378 17:05:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:16.378 17:05:31 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:16.378 17:05:31 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:16.378 17:05:31 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.378 17:05:31 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:16.378 17:05:31 -- common/autotest_common.sh@10 -- # set +x 00:18:16.378 17:05:31 -- host/discovery.sh@67 -- # sort 00:18:16.378 17:05:31 -- host/discovery.sh@67 -- # xargs 00:18:16.378 17:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.378 17:05:31 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:16.378 17:05:31 -- host/discovery.sh@146 -- # get_bdev_list 00:18:16.378 17:05:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.378 17:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.378 17:05:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:16.378 17:05:31 -- common/autotest_common.sh@10 -- # set +x 00:18:16.378 17:05:31 -- host/discovery.sh@55 -- # sort 00:18:16.378 17:05:31 -- host/discovery.sh@55 -- # xargs 00:18:16.378 17:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.378 17:05:31 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:16.378 17:05:31 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:16.378 17:05:31 -- common/autotest_common.sh@638 -- # local es=0 00:18:16.378 17:05:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:16.378 17:05:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.378 17:05:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:16.378 17:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.378 17:05:31 -- common/autotest_common.sh@10 -- # set +x 00:18:16.378 request: 00:18:16.378 { 00:18:16.378 "name": "nvme_second", 00:18:16.378 "trtype": "tcp", 00:18:16.378 "traddr": "10.0.0.2", 00:18:16.378 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:16.378 "adrfam": "ipv4", 00:18:16.378 "trsvcid": "8009", 00:18:16.378 "wait_for_attach": true, 00:18:16.378 "method": "bdev_nvme_start_discovery", 00:18:16.378 "req_id": 1 00:18:16.378 } 00:18:16.378 Got JSON-RPC error response 00:18:16.378 response: 00:18:16.378 { 00:18:16.378 "code": -17, 00:18:16.378 "message": "File exists" 00:18:16.378 } 00:18:16.378 17:05:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:16.378 17:05:31 -- common/autotest_common.sh@641 -- # es=1 00:18:16.378 17:05:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:16.378 17:05:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:16.378 17:05:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:16.378 17:05:31 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:16.378 17:05:31 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:16.378 17:05:31 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:16.378 17:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.378 17:05:31 -- common/autotest_common.sh@10 -- # set +x 00:18:16.378 17:05:31 -- host/discovery.sh@67 -- # sort 00:18:16.378 17:05:31 -- host/discovery.sh@67 -- # xargs 00:18:16.378 17:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.378 17:05:31 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:16.378 17:05:31 -- host/discovery.sh@152 -- # get_bdev_list 00:18:16.378 17:05:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.378 17:05:31 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.378 17:05:31 -- common/autotest_common.sh@10 -- # set +x 00:18:16.378 17:05:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:16.378 17:05:31 -- host/discovery.sh@55 -- # sort 00:18:16.378 17:05:31 -- host/discovery.sh@55 -- # xargs 00:18:16.378 17:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.378 17:05:31 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:16.378 17:05:31 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:16.378 17:05:31 -- common/autotest_common.sh@638 -- # local es=0 00:18:16.378 17:05:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:16.378 17:05:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:16.378 17:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.378 17:05:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:16.378 17:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.378 17:05:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.316 [2024-04-18 17:05:32.996159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.316 [2024-04-18 17:05:32.996364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.316 [2024-04-18 17:05:32.996406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef7cc0 with 
addr=10.0.0.2, port=8010 00:18:17.316 [2024-04-18 17:05:32.996448] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:17.316 [2024-04-18 17:05:32.996463] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:17.316 [2024-04-18 17:05:32.996476] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:18.688 [2024-04-18 17:05:33.998608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.688 [2024-04-18 17:05:33.998783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.688 [2024-04-18 17:05:33.998810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f12b10 with addr=10.0.0.2, port=8010 00:18:18.688 [2024-04-18 17:05:33.998829] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:18.688 [2024-04-18 17:05:33.998841] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:18.688 [2024-04-18 17:05:33.998853] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:19.625 [2024-04-18 17:05:35.000847] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:19.625 request: 00:18:19.625 { 00:18:19.625 "name": "nvme_second", 00:18:19.625 "trtype": "tcp", 00:18:19.625 "traddr": "10.0.0.2", 00:18:19.625 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:19.625 "adrfam": "ipv4", 00:18:19.625 "trsvcid": "8010", 00:18:19.625 "attach_timeout_ms": 3000, 00:18:19.625 "method": "bdev_nvme_start_discovery", 00:18:19.625 "req_id": 1 00:18:19.625 } 00:18:19.625 Got JSON-RPC error response 00:18:19.625 response: 00:18:19.625 { 00:18:19.625 "code": -110, 00:18:19.625 "message": "Connection timed out" 00:18:19.625 } 00:18:19.625 17:05:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:19.625 17:05:35 -- 
common/autotest_common.sh@641 -- # es=1 00:18:19.625 17:05:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:19.625 17:05:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:19.625 17:05:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:19.625 17:05:35 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:19.625 17:05:35 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:19.625 17:05:35 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:19.625 17:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.625 17:05:35 -- common/autotest_common.sh@10 -- # set +x 00:18:19.625 17:05:35 -- host/discovery.sh@67 -- # sort 00:18:19.625 17:05:35 -- host/discovery.sh@67 -- # xargs 00:18:19.625 17:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.625 17:05:35 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:19.625 17:05:35 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:19.625 17:05:35 -- host/discovery.sh@161 -- # kill 1735370 00:18:19.625 17:05:35 -- host/discovery.sh@162 -- # nvmftestfini 00:18:19.625 17:05:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:19.625 17:05:35 -- nvmf/common.sh@117 -- # sync 00:18:19.625 17:05:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.625 17:05:35 -- nvmf/common.sh@120 -- # set +e 00:18:19.625 17:05:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.625 17:05:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.625 rmmod nvme_tcp 00:18:19.625 rmmod nvme_fabrics 00:18:19.625 rmmod nvme_keyring 00:18:19.625 17:05:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.625 17:05:35 -- nvmf/common.sh@124 -- # set -e 00:18:19.625 17:05:35 -- nvmf/common.sh@125 -- # return 0 00:18:19.625 17:05:35 -- nvmf/common.sh@478 -- # '[' -n 1735218 ']' 00:18:19.625 17:05:35 -- nvmf/common.sh@479 -- # killprocess 1735218 00:18:19.625 17:05:35 -- common/autotest_common.sh@936 -- # '[' -z 1735218 ']' 
00:18:19.625 17:05:35 -- common/autotest_common.sh@940 -- # kill -0 1735218 00:18:19.625 17:05:35 -- common/autotest_common.sh@941 -- # uname 00:18:19.625 17:05:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.625 17:05:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1735218 00:18:19.625 17:05:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:19.625 17:05:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:19.625 17:05:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1735218' 00:18:19.625 killing process with pid 1735218 00:18:19.625 17:05:35 -- common/autotest_common.sh@955 -- # kill 1735218 00:18:19.625 17:05:35 -- common/autotest_common.sh@960 -- # wait 1735218 00:18:19.883 17:05:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:19.883 17:05:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:19.883 17:05:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:19.883 17:05:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.883 17:05:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.883 17:05:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.883 17:05:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.883 17:05:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.788 17:05:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:21.788 00:18:21.788 real 0m14.002s 00:18:21.788 user 0m20.096s 00:18:21.788 sys 0m2.851s 00:18:21.788 17:05:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:21.788 17:05:37 -- common/autotest_common.sh@10 -- # set +x 00:18:21.788 ************************************ 00:18:21.788 END TEST nvmf_discovery 00:18:21.788 ************************************ 00:18:21.788 17:05:37 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:21.788 17:05:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:21.788 17:05:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:21.788 17:05:37 -- common/autotest_common.sh@10 -- # set +x 00:18:22.046 ************************************ 00:18:22.046 START TEST nvmf_discovery_remove_ifc 00:18:22.046 ************************************ 00:18:22.046 17:05:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:22.046 * Looking for test storage... 00:18:22.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:22.046 17:05:37 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.046 17:05:37 -- nvmf/common.sh@7 -- # uname -s 00:18:22.046 17:05:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.046 17:05:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.046 17:05:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.046 17:05:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.046 17:05:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.046 17:05:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.046 17:05:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.046 17:05:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.046 17:05:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.046 17:05:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.046 17:05:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.046 17:05:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.046 17:05:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:18:22.046 17:05:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.046 17:05:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.046 17:05:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.046 17:05:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.046 17:05:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.046 17:05:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.046 17:05:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.047 17:05:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.047 17:05:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.047 17:05:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.047 17:05:37 -- paths/export.sh@5 -- # export PATH 00:18:22.047 17:05:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.047 17:05:37 -- nvmf/common.sh@47 -- # : 0 00:18:22.047 17:05:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.047 17:05:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.047 17:05:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.047 17:05:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.047 17:05:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.047 17:05:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.047 17:05:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.047 17:05:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.047 17:05:37 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:22.047 17:05:37 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:22.047 17:05:37 -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:22.047 17:05:37 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:22.047 17:05:37 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:22.047 17:05:37 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:22.047 17:05:37 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:22.047 17:05:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:22.047 17:05:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.047 17:05:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:22.047 17:05:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:22.047 17:05:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:22.047 17:05:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.047 17:05:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.047 17:05:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.047 17:05:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:22.047 17:05:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:22.047 17:05:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.047 17:05:37 -- common/autotest_common.sh@10 -- # set +x 00:18:23.950 17:05:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:23.950 17:05:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.950 17:05:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.950 17:05:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.950 17:05:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.950 17:05:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.950 17:05:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.950 17:05:39 -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.950 17:05:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.950 17:05:39 -- nvmf/common.sh@296 -- # e810=() 
00:18:23.950 17:05:39 -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.950 17:05:39 -- nvmf/common.sh@297 -- # x722=() 00:18:23.950 17:05:39 -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.950 17:05:39 -- nvmf/common.sh@298 -- # mlx=() 00:18:23.950 17:05:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.950 17:05:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.950 17:05:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.951 17:05:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.951 17:05:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.951 17:05:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.951 17:05:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.951 17:05:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:23.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:23.951 17:05:39 -- nvmf/common.sh@342 -- 
# [[ ice == unknown ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.951 17:05:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:23.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:23.951 17:05:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.951 17:05:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.951 17:05:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.951 17:05:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.951 17:05:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.951 17:05:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:23.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:23.951 17:05:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.951 17:05:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.951 17:05:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.951 17:05:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.951 17:05:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:23.951 17:05:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:23.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:23.951 17:05:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.951 17:05:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:23.951 17:05:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:23.951 17:05:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:23.951 17:05:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:23.951 17:05:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.951 17:05:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.951 17:05:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.951 17:05:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.951 17:05:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.951 17:05:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.951 17:05:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.951 17:05:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.951 17:05:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.951 17:05:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.951 17:05:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.951 17:05:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.951 17:05:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.951 17:05:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.951 17:05:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.951 17:05:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.951 17:05:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.210 
17:05:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.210 17:05:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.210 17:05:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:24.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:18:24.210 00:18:24.210 --- 10.0.0.2 ping statistics --- 00:18:24.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.210 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:18:24.210 17:05:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:18:24.210 00:18:24.210 --- 10.0.0.1 ping statistics --- 00:18:24.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.210 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:18:24.210 17:05:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.210 17:05:39 -- nvmf/common.sh@411 -- # return 0 00:18:24.210 17:05:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:24.210 17:05:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.210 17:05:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:24.210 17:05:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:24.210 17:05:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.210 17:05:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:24.210 17:05:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:24.210 17:05:39 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:24.210 17:05:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:24.210 17:05:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:24.210 17:05:39 -- common/autotest_common.sh@10 -- # set +x 00:18:24.210 17:05:39 -- 
nvmf/common.sh@470 -- # nvmfpid=1738414 00:18:24.210 17:05:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.210 17:05:39 -- nvmf/common.sh@471 -- # waitforlisten 1738414 00:18:24.210 17:05:39 -- common/autotest_common.sh@817 -- # '[' -z 1738414 ']' 00:18:24.210 17:05:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.210 17:05:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.211 17:05:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.211 17:05:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.211 17:05:39 -- common/autotest_common.sh@10 -- # set +x 00:18:24.211 [2024-04-18 17:05:39.769562] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:24.211 [2024-04-18 17:05:39.769659] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.211 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.211 [2024-04-18 17:05:39.834494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.469 [2024-04-18 17:05:39.941866] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.469 [2024-04-18 17:05:39.941940] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:24.469 [2024-04-18 17:05:39.941955] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.469 [2024-04-18 17:05:39.941966] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.469 [2024-04-18 17:05:39.941975] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.469 [2024-04-18 17:05:39.942016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.469 17:05:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.469 17:05:40 -- common/autotest_common.sh@850 -- # return 0 00:18:24.469 17:05:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:24.469 17:05:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:24.469 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.469 17:05:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.469 17:05:40 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:24.469 17:05:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.469 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.469 [2024-04-18 17:05:40.103526] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.469 [2024-04-18 17:05:40.111703] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:24.469 null0 00:18:24.469 [2024-04-18 17:05:40.143619] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.469 17:05:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.469 17:05:40 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1738510 00:18:24.469 17:05:40 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:24.469 17:05:40 -- host/discovery_remove_ifc.sh@60 -- 
# waitforlisten 1738510 /tmp/host.sock 00:18:24.469 17:05:40 -- common/autotest_common.sh@817 -- # '[' -z 1738510 ']' 00:18:24.469 17:05:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:24.469 17:05:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.469 17:05:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:24.469 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:24.469 17:05:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.469 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.730 [2024-04-18 17:05:40.212456] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:24.730 [2024-04-18 17:05:40.212545] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738510 ] 00:18:24.730 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.730 [2024-04-18 17:05:40.276882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.730 [2024-04-18 17:05:40.395998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.730 17:05:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.730 17:05:40 -- common/autotest_common.sh@850 -- # return 0 00:18:24.730 17:05:40 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.730 17:05:40 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:24.992 17:05:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.992 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.992 17:05:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.992 17:05:40 
-- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:24.992 17:05:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.992 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.992 17:05:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.992 17:05:40 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:24.992 17:05:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.992 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:18:25.927 [2024-04-18 17:05:41.611116] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:25.927 [2024-04-18 17:05:41.611155] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:25.927 [2024-04-18 17:05:41.611182] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:26.185 [2024-04-18 17:05:41.739604] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:26.185 [2024-04-18 17:05:41.802110] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:26.185 [2024-04-18 17:05:41.802177] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:26.185 [2024-04-18 17:05:41.802221] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:26.185 [2024-04-18 17:05:41.802251] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:26.185 [2024-04-18 17:05:41.802291] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:18:26.185 17:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:26.185 17:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.185 17:05:41 -- common/autotest_common.sh@10 -- # set +x 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:26.185 [2024-04-18 17:05:41.809568] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbde280 was disconnected and freed. delete nvme_qpair. 00:18:26.185 17:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:18:26.185 17:05:41 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:18:26.442 17:05:41 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:26.442 17:05:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:26.442 17:05:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.442 17:05:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:26.442 17:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.442 17:05:41 -- common/autotest_common.sh@10 -- # set +x 00:18:26.442 17:05:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:26.442 17:05:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:26.442 17:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.442 17:05:41 -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:26.442 17:05:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:27.379 17:05:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:27.379 17:05:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:27.379 17:05:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:27.379 17:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.379 17:05:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:27.379 17:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:27.379 17:05:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:27.379 17:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.379 17:05:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:27.379 17:05:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:28.312 17:05:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:28.312 17:05:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.312 17:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.313 17:05:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:28.313 17:05:43 -- common/autotest_common.sh@10 -- # set +x 00:18:28.313 17:05:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:28.313 17:05:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:28.313 17:05:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.571 17:05:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:28.571 17:05:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:29.507 17:05:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:29.507 17:05:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:29.507 17:05:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:29.507 17:05:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:29.507 17:05:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.507 17:05:45 -- common/autotest_common.sh@10 -- # set +x 00:18:29.507 17:05:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:29.507 17:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.507 17:05:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:29.507 17:05:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:30.442 17:05:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:30.442 17:05:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.442 17:05:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:30.442 17:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.442 17:05:46 -- common/autotest_common.sh@10 -- # set +x 00:18:30.442 17:05:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:30.442 17:05:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:30.442 17:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.442 17:05:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:30.442 17:05:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:31.817 17:05:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:31.817 17:05:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.817 17:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.817 17:05:47 -- common/autotest_common.sh@10 -- # set +x 00:18:31.817 17:05:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:31.817 17:05:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:31.817 17:05:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:31.817 17:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.817 17:05:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:31.817 17:05:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:31.817 [2024-04-18 17:05:47.243235] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:31.817 [2024-04-18 17:05:47.243302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.817 [2024-04-18 17:05:47.243325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.817 [2024-04-18 17:05:47.243344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.817 [2024-04-18 17:05:47.243359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.817 [2024-04-18 17:05:47.243374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.817 [2024-04-18 17:05:47.243397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.817 [2024-04-18 17:05:47.243429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.817 [2024-04-18 17:05:47.243442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.817 [2024-04-18 17:05:47.243456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.817 [2024-04-18 17:05:47.243468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.817 [2024-04-18 17:05:47.243480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xba47a0 is same with the state(5) to be set 00:18:31.817 [2024-04-18 17:05:47.253254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba47a0 (9): Bad file descriptor 00:18:31.817 [2024-04-18 17:05:47.263300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:32.753 17:05:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:32.753 17:05:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:32.753 17:05:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:32.753 17:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:32.753 17:05:48 -- common/autotest_common.sh@10 -- # set +x 00:18:32.753 17:05:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:32.753 17:05:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:32.753 [2024-04-18 17:05:48.274408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:33.692 [2024-04-18 17:05:49.298430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:33.692 [2024-04-18 17:05:49.298519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba47a0 with addr=10.0.0.2, port=4420 00:18:33.692 [2024-04-18 17:05:49.298548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba47a0 is same with the state(5) to be set 00:18:33.692 [2024-04-18 17:05:49.299055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba47a0 (9): Bad file descriptor 00:18:33.692 [2024-04-18 17:05:49.299102] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:33.692 [2024-04-18 17:05:49.299149] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:33.692 [2024-04-18 17:05:49.299192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.692 [2024-04-18 17:05:49.299216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.692 [2024-04-18 17:05:49.299243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.692 [2024-04-18 17:05:49.299259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.692 [2024-04-18 17:05:49.299275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.692 [2024-04-18 17:05:49.299290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.692 [2024-04-18 17:05:49.299306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.692 [2024-04-18 17:05:49.299320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.692 [2024-04-18 17:05:49.299335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.692 [2024-04-18 17:05:49.299350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.692 [2024-04-18 17:05:49.299364] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:18:33.692 [2024-04-18 17:05:49.299588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba4bb0 (9): Bad file descriptor 00:18:33.692 [2024-04-18 17:05:49.300605] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:33.692 [2024-04-18 17:05:49.300626] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:33.692 17:05:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:33.692 17:05:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:33.692 17:05:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:34.667 17:05:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:34.667 17:05:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.667 17:05:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:34.667 17:05:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.667 17:05:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.667 17:05:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:34.667 17:05:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:34.667 17:05:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.667 17:05:50 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:34.667 17:05:50 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:34.667 17:05:50 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:34.926 17:05:50 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:34.926 17:05:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:34.926 17:05:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.926 17:05:50 -- host/discovery_remove_ifc.sh@29 
-- # jq -r '.[].name' 00:18:34.926 17:05:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.926 17:05:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.926 17:05:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:34.926 17:05:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:34.926 17:05:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.926 17:05:50 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:34.926 17:05:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:35.862 [2024-04-18 17:05:51.358155] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:35.862 [2024-04-18 17:05:51.358200] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:35.862 [2024-04-18 17:05:51.358226] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:35.862 17:05:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:35.862 17:05:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.862 17:05:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:35.862 17:05:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:35.862 17:05:51 -- common/autotest_common.sh@10 -- # set +x 00:18:35.862 17:05:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:35.862 17:05:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:35.862 17:05:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:35.862 17:05:51 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:35.862 17:05:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:35.862 [2024-04-18 17:05:51.486815] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:36.121 [2024-04-18 17:05:51.668269] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks 
with offset 0 00:18:36.121 [2024-04-18 17:05:51.668323] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:36.121 [2024-04-18 17:05:51.668360] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:36.121 [2024-04-18 17:05:51.668396] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:36.121 [2024-04-18 17:05:51.668413] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:36.121 [2024-04-18 17:05:51.675928] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbe89f0 was disconnected and freed. delete nvme_qpair. 00:18:37.058 17:05:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:37.058 17:05:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.058 17:05:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:37.058 17:05:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.058 17:05:52 -- common/autotest_common.sh@10 -- # set +x 00:18:37.058 17:05:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:37.058 17:05:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:37.058 17:05:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.058 17:05:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:37.058 17:05:52 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:37.058 17:05:52 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1738510 00:18:37.058 17:05:52 -- common/autotest_common.sh@936 -- # '[' -z 1738510 ']' 00:18:37.058 17:05:52 -- common/autotest_common.sh@940 -- # kill -0 1738510 00:18:37.058 17:05:52 -- common/autotest_common.sh@941 -- # uname 00:18:37.058 17:05:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:37.058 17:05:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1738510 
00:18:37.058 17:05:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:37.058 17:05:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:37.058 17:05:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1738510' 00:18:37.058 killing process with pid 1738510 00:18:37.058 17:05:52 -- common/autotest_common.sh@955 -- # kill 1738510 00:18:37.058 17:05:52 -- common/autotest_common.sh@960 -- # wait 1738510 00:18:37.318 17:05:52 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:37.318 17:05:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:37.318 17:05:52 -- nvmf/common.sh@117 -- # sync 00:18:37.318 17:05:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.318 17:05:52 -- nvmf/common.sh@120 -- # set +e 00:18:37.318 17:05:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.319 17:05:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.319 rmmod nvme_tcp 00:18:37.319 rmmod nvme_fabrics 00:18:37.319 rmmod nvme_keyring 00:18:37.319 17:05:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.319 17:05:52 -- nvmf/common.sh@124 -- # set -e 00:18:37.319 17:05:52 -- nvmf/common.sh@125 -- # return 0 00:18:37.319 17:05:52 -- nvmf/common.sh@478 -- # '[' -n 1738414 ']' 00:18:37.319 17:05:52 -- nvmf/common.sh@479 -- # killprocess 1738414 00:18:37.319 17:05:52 -- common/autotest_common.sh@936 -- # '[' -z 1738414 ']' 00:18:37.319 17:05:52 -- common/autotest_common.sh@940 -- # kill -0 1738414 00:18:37.319 17:05:52 -- common/autotest_common.sh@941 -- # uname 00:18:37.319 17:05:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:37.319 17:05:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1738414 00:18:37.319 17:05:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:37.319 17:05:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:37.319 17:05:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1738414' 
00:18:37.319 killing process with pid 1738414 00:18:37.319 17:05:52 -- common/autotest_common.sh@955 -- # kill 1738414 00:18:37.319 17:05:52 -- common/autotest_common.sh@960 -- # wait 1738414 00:18:37.578 17:05:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:37.578 17:05:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:37.578 17:05:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:37.578 17:05:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.578 17:05:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.578 17:05:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.578 17:05:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.578 17:05:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.110 17:05:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:40.110 00:18:40.111 real 0m17.661s 00:18:40.111 user 0m24.628s 00:18:40.111 sys 0m2.927s 00:18:40.111 17:05:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:40.111 17:05:55 -- common/autotest_common.sh@10 -- # set +x 00:18:40.111 ************************************ 00:18:40.111 END TEST nvmf_discovery_remove_ifc 00:18:40.111 ************************************ 00:18:40.111 17:05:55 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:40.111 17:05:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:40.111 17:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:40.111 17:05:55 -- common/autotest_common.sh@10 -- # set +x 00:18:40.111 ************************************ 00:18:40.111 START TEST nvmf_identify_kernel_target 00:18:40.111 ************************************ 00:18:40.111 17:05:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 
00:18:40.111 * Looking for test storage... 00:18:40.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:40.111 17:05:55 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.111 17:05:55 -- nvmf/common.sh@7 -- # uname -s 00:18:40.111 17:05:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.111 17:05:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.111 17:05:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.111 17:05:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.111 17:05:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.111 17:05:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.111 17:05:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.111 17:05:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.111 17:05:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.111 17:05:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.111 17:05:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.111 17:05:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.111 17:05:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.111 17:05:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.111 17:05:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.111 17:05:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.111 17:05:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.111 17:05:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.111 17:05:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.111 17:05:55 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.111 17:05:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.111 17:05:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.111 17:05:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.111 17:05:55 -- paths/export.sh@5 -- # export PATH 00:18:40.111 17:05:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.111 17:05:55 -- nvmf/common.sh@47 -- # : 0 00:18:40.111 17:05:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:40.111 17:05:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:40.111 17:05:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.111 17:05:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.111 17:05:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.111 17:05:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:40.111 17:05:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:40.111 17:05:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:40.111 17:05:55 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:40.111 17:05:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:40.111 17:05:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.111 17:05:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:40.111 17:05:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:40.111 17:05:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:40.111 17:05:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.111 17:05:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.111 17:05:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.111 17:05:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:40.111 17:05:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:40.111 
17:05:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:40.111 17:05:55 -- common/autotest_common.sh@10 -- # set +x 00:18:42.010 17:05:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:42.010 17:05:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.010 17:05:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.010 17:05:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.010 17:05:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.010 17:05:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:42.010 17:05:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.010 17:05:57 -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.010 17:05:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.010 17:05:57 -- nvmf/common.sh@296 -- # e810=() 00:18:42.010 17:05:57 -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.010 17:05:57 -- nvmf/common.sh@297 -- # x722=() 00:18:42.010 17:05:57 -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.010 17:05:57 -- nvmf/common.sh@298 -- # mlx=() 00:18:42.010 17:05:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:42.010 17:05:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.010 17:05:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.010 17:05:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:42.010 17:05:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:42.010 17:05:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:42.010 17:05:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:42.010 17:05:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.010 17:05:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.010 17:05:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:42.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:42.010 17:05:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.010 17:05:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.010 17:05:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.010 17:05:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.010 17:05:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.010 17:05:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.011 17:05:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:42.011 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:42.011 17:05:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.011 17:05:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:18:42.011 17:05:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.011 17:05:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:42.011 17:05:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.011 17:05:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:42.011 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:42.011 17:05:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.011 17:05:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.011 17:05:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.011 17:05:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:42.011 17:05:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.011 17:05:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:42.011 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:42.011 17:05:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.011 17:05:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:42.011 17:05:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:42.011 17:05:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:42.011 17:05:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.011 17:05:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.011 17:05:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.011 17:05:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:42.011 17:05:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.011 17:05:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.011 17:05:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:42.011 17:05:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:18:42.011 17:05:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.011 17:05:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:42.011 17:05:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:42.011 17:05:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.011 17:05:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.011 17:05:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.011 17:05:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.011 17:05:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.011 17:05:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.011 17:05:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.011 17:05:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.011 17:05:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:18:42.011 00:18:42.011 --- 10.0.0.2 ping statistics --- 00:18:42.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.011 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:18:42.011 17:05:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:42.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:18:42.011 00:18:42.011 --- 10.0.0.1 ping statistics --- 00:18:42.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.011 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:18:42.011 17:05:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.011 17:05:57 -- nvmf/common.sh@411 -- # return 0 00:18:42.011 17:05:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:42.011 17:05:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.011 17:05:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.011 17:05:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:42.011 17:05:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:42.011 17:05:57 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:42.011 17:05:57 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:42.011 17:05:57 -- nvmf/common.sh@717 -- # local ip 00:18:42.011 17:05:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:42.011 17:05:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:42.011 17:05:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.011 17:05:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.011 17:05:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:42.011 17:05:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:42.011 17:05:57 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:42.011 17:05:57 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:42.011 17:05:57 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:42.011 17:05:57 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:42.011 17:05:57 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:42.011 17:05:57 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:42.011 17:05:57 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:42.011 17:05:57 -- nvmf/common.sh@628 -- # local block nvme 00:18:42.011 17:05:57 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:42.011 17:05:57 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:42.011 17:05:57 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:42.947 Waiting for block devices as requested 00:18:42.947 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:18:43.204 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:43.204 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:43.204 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:43.204 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:43.462 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:43.462 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:43.462 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:43.462 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:43.720 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:43.720 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:43.720 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:43.720 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:43.720 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:43.979 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:43.979 0000:80:04.1 (8086 0e21): vfio-pci 
-> ioatdma 00:18:43.979 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:44.239 17:05:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:44.239 17:05:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:44.239 17:05:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:44.239 17:05:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:44.239 17:05:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:44.239 17:05:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:44.239 17:05:59 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:44.239 17:05:59 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:44.239 17:05:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:44.239 No valid GPT data, bailing 00:18:44.239 17:05:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:44.239 17:05:59 -- scripts/common.sh@391 -- # pt= 00:18:44.239 17:05:59 -- scripts/common.sh@392 -- # return 1 00:18:44.239 17:05:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:44.239 17:05:59 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:44.239 17:05:59 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:44.239 17:05:59 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:44.239 17:05:59 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:44.239 17:05:59 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:44.239 17:05:59 -- nvmf/common.sh@656 -- # echo 1 00:18:44.239 17:05:59 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:44.239 17:05:59 -- nvmf/common.sh@658 -- # echo 1 00:18:44.239 17:05:59 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:18:44.239 17:05:59 -- nvmf/common.sh@661 -- # echo tcp 00:18:44.239 17:05:59 -- nvmf/common.sh@662 -- # 
echo 4420 00:18:44.239 17:05:59 -- nvmf/common.sh@663 -- # echo ipv4 00:18:44.239 17:05:59 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:44.239 17:05:59 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:18:44.239 00:18:44.239 Discovery Log Number of Records 2, Generation counter 2 00:18:44.239 =====Discovery Log Entry 0====== 00:18:44.239 trtype: tcp 00:18:44.239 adrfam: ipv4 00:18:44.239 subtype: current discovery subsystem 00:18:44.239 treq: not specified, sq flow control disable supported 00:18:44.239 portid: 1 00:18:44.239 trsvcid: 4420 00:18:44.239 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:44.239 traddr: 10.0.0.1 00:18:44.239 eflags: none 00:18:44.239 sectype: none 00:18:44.239 =====Discovery Log Entry 1====== 00:18:44.239 trtype: tcp 00:18:44.239 adrfam: ipv4 00:18:44.239 subtype: nvme subsystem 00:18:44.239 treq: not specified, sq flow control disable supported 00:18:44.239 portid: 1 00:18:44.239 trsvcid: 4420 00:18:44.239 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:44.239 traddr: 10.0.0.1 00:18:44.239 eflags: none 00:18:44.239 sectype: none 00:18:44.239 17:05:59 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:44.239 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:44.239 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.239 ===================================================== 00:18:44.239 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:44.239 ===================================================== 00:18:44.239 Controller Capabilities/Features 00:18:44.239 ================================ 00:18:44.239 Vendor ID: 0000 
00:18:44.239 Subsystem Vendor ID: 0000 00:18:44.239 Serial Number: 8b2e00292a9fa0cdd1a7 00:18:44.239 Model Number: Linux 00:18:44.239 Firmware Version: 6.7.0-68 00:18:44.239 Recommended Arb Burst: 0 00:18:44.239 IEEE OUI Identifier: 00 00 00 00:18:44.239 Multi-path I/O 00:18:44.239 May have multiple subsystem ports: No 00:18:44.239 May have multiple controllers: No 00:18:44.239 Associated with SR-IOV VF: No 00:18:44.239 Max Data Transfer Size: Unlimited 00:18:44.239 Max Number of Namespaces: 0 00:18:44.239 Max Number of I/O Queues: 1024 00:18:44.239 NVMe Specification Version (VS): 1.3 00:18:44.239 NVMe Specification Version (Identify): 1.3 00:18:44.239 Maximum Queue Entries: 1024 00:18:44.239 Contiguous Queues Required: No 00:18:44.239 Arbitration Mechanisms Supported 00:18:44.239 Weighted Round Robin: Not Supported 00:18:44.239 Vendor Specific: Not Supported 00:18:44.239 Reset Timeout: 7500 ms 00:18:44.239 Doorbell Stride: 4 bytes 00:18:44.239 NVM Subsystem Reset: Not Supported 00:18:44.239 Command Sets Supported 00:18:44.239 NVM Command Set: Supported 00:18:44.239 Boot Partition: Not Supported 00:18:44.239 Memory Page Size Minimum: 4096 bytes 00:18:44.239 Memory Page Size Maximum: 4096 bytes 00:18:44.239 Persistent Memory Region: Not Supported 00:18:44.239 Optional Asynchronous Events Supported 00:18:44.239 Namespace Attribute Notices: Not Supported 00:18:44.239 Firmware Activation Notices: Not Supported 00:18:44.239 ANA Change Notices: Not Supported 00:18:44.239 PLE Aggregate Log Change Notices: Not Supported 00:18:44.239 LBA Status Info Alert Notices: Not Supported 00:18:44.239 EGE Aggregate Log Change Notices: Not Supported 00:18:44.239 Normal NVM Subsystem Shutdown event: Not Supported 00:18:44.239 Zone Descriptor Change Notices: Not Supported 00:18:44.239 Discovery Log Change Notices: Supported 00:18:44.239 Controller Attributes 00:18:44.239 128-bit Host Identifier: Not Supported 00:18:44.239 Non-Operational Permissive Mode: Not Supported 00:18:44.239 NVM 
Sets: Not Supported 00:18:44.239 Read Recovery Levels: Not Supported 00:18:44.239 Endurance Groups: Not Supported 00:18:44.239 Predictable Latency Mode: Not Supported 00:18:44.239 Traffic Based Keep ALive: Not Supported 00:18:44.239 Namespace Granularity: Not Supported 00:18:44.239 SQ Associations: Not Supported 00:18:44.239 UUID List: Not Supported 00:18:44.239 Multi-Domain Subsystem: Not Supported 00:18:44.239 Fixed Capacity Management: Not Supported 00:18:44.240 Variable Capacity Management: Not Supported 00:18:44.240 Delete Endurance Group: Not Supported 00:18:44.240 Delete NVM Set: Not Supported 00:18:44.240 Extended LBA Formats Supported: Not Supported 00:18:44.240 Flexible Data Placement Supported: Not Supported 00:18:44.240 00:18:44.240 Controller Memory Buffer Support 00:18:44.240 ================================ 00:18:44.240 Supported: No 00:18:44.240 00:18:44.240 Persistent Memory Region Support 00:18:44.240 ================================ 00:18:44.240 Supported: No 00:18:44.240 00:18:44.240 Admin Command Set Attributes 00:18:44.240 ============================ 00:18:44.240 Security Send/Receive: Not Supported 00:18:44.240 Format NVM: Not Supported 00:18:44.240 Firmware Activate/Download: Not Supported 00:18:44.240 Namespace Management: Not Supported 00:18:44.240 Device Self-Test: Not Supported 00:18:44.240 Directives: Not Supported 00:18:44.240 NVMe-MI: Not Supported 00:18:44.240 Virtualization Management: Not Supported 00:18:44.240 Doorbell Buffer Config: Not Supported 00:18:44.240 Get LBA Status Capability: Not Supported 00:18:44.240 Command & Feature Lockdown Capability: Not Supported 00:18:44.240 Abort Command Limit: 1 00:18:44.240 Async Event Request Limit: 1 00:18:44.240 Number of Firmware Slots: N/A 00:18:44.240 Firmware Slot 1 Read-Only: N/A 00:18:44.240 Firmware Activation Without Reset: N/A 00:18:44.240 Multiple Update Detection Support: N/A 00:18:44.240 Firmware Update Granularity: No Information Provided 00:18:44.240 Per-Namespace SMART 
Log: No 00:18:44.240 Asymmetric Namespace Access Log Page: Not Supported 00:18:44.240 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:44.240 Command Effects Log Page: Not Supported 00:18:44.240 Get Log Page Extended Data: Supported 00:18:44.240 Telemetry Log Pages: Not Supported 00:18:44.240 Persistent Event Log Pages: Not Supported 00:18:44.240 Supported Log Pages Log Page: May Support 00:18:44.240 Commands Supported & Effects Log Page: Not Supported 00:18:44.240 Feature Identifiers & Effects Log Page:May Support 00:18:44.240 NVMe-MI Commands & Effects Log Page: May Support 00:18:44.240 Data Area 4 for Telemetry Log: Not Supported 00:18:44.240 Error Log Page Entries Supported: 1 00:18:44.240 Keep Alive: Not Supported 00:18:44.240 00:18:44.240 NVM Command Set Attributes 00:18:44.240 ========================== 00:18:44.240 Submission Queue Entry Size 00:18:44.240 Max: 1 00:18:44.240 Min: 1 00:18:44.240 Completion Queue Entry Size 00:18:44.240 Max: 1 00:18:44.240 Min: 1 00:18:44.240 Number of Namespaces: 0 00:18:44.240 Compare Command: Not Supported 00:18:44.240 Write Uncorrectable Command: Not Supported 00:18:44.240 Dataset Management Command: Not Supported 00:18:44.240 Write Zeroes Command: Not Supported 00:18:44.240 Set Features Save Field: Not Supported 00:18:44.240 Reservations: Not Supported 00:18:44.240 Timestamp: Not Supported 00:18:44.240 Copy: Not Supported 00:18:44.240 Volatile Write Cache: Not Present 00:18:44.240 Atomic Write Unit (Normal): 1 00:18:44.240 Atomic Write Unit (PFail): 1 00:18:44.240 Atomic Compare & Write Unit: 1 00:18:44.240 Fused Compare & Write: Not Supported 00:18:44.240 Scatter-Gather List 00:18:44.240 SGL Command Set: Supported 00:18:44.240 SGL Keyed: Not Supported 00:18:44.240 SGL Bit Bucket Descriptor: Not Supported 00:18:44.240 SGL Metadata Pointer: Not Supported 00:18:44.240 Oversized SGL: Not Supported 00:18:44.240 SGL Metadata Address: Not Supported 00:18:44.240 SGL Offset: Supported 00:18:44.240 Transport SGL Data 
Block: Not Supported 00:18:44.240 Replay Protected Memory Block: Not Supported 00:18:44.240 00:18:44.240 Firmware Slot Information 00:18:44.240 ========================= 00:18:44.240 Active slot: 0 00:18:44.240 00:18:44.240 00:18:44.240 Error Log 00:18:44.240 ========= 00:18:44.240 00:18:44.240 Active Namespaces 00:18:44.240 ================= 00:18:44.240 Discovery Log Page 00:18:44.240 ================== 00:18:44.240 Generation Counter: 2 00:18:44.240 Number of Records: 2 00:18:44.240 Record Format: 0 00:18:44.240 00:18:44.240 Discovery Log Entry 0 00:18:44.240 ---------------------- 00:18:44.240 Transport Type: 3 (TCP) 00:18:44.240 Address Family: 1 (IPv4) 00:18:44.240 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:44.240 Entry Flags: 00:18:44.240 Duplicate Returned Information: 0 00:18:44.240 Explicit Persistent Connection Support for Discovery: 0 00:18:44.240 Transport Requirements: 00:18:44.240 Secure Channel: Not Specified 00:18:44.240 Port ID: 1 (0x0001) 00:18:44.240 Controller ID: 65535 (0xffff) 00:18:44.240 Admin Max SQ Size: 32 00:18:44.240 Transport Service Identifier: 4420 00:18:44.240 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:44.240 Transport Address: 10.0.0.1 00:18:44.240 Discovery Log Entry 1 00:18:44.240 ---------------------- 00:18:44.240 Transport Type: 3 (TCP) 00:18:44.240 Address Family: 1 (IPv4) 00:18:44.240 Subsystem Type: 2 (NVM Subsystem) 00:18:44.240 Entry Flags: 00:18:44.240 Duplicate Returned Information: 0 00:18:44.240 Explicit Persistent Connection Support for Discovery: 0 00:18:44.240 Transport Requirements: 00:18:44.240 Secure Channel: Not Specified 00:18:44.240 Port ID: 1 (0x0001) 00:18:44.240 Controller ID: 65535 (0xffff) 00:18:44.240 Admin Max SQ Size: 32 00:18:44.240 Transport Service Identifier: 4420 00:18:44.240 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:44.240 Transport Address: 10.0.0.1 00:18:44.240 17:05:59 -- host/identify_kernel_nvmf.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:44.240 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.240 get_feature(0x01) failed 00:18:44.240 get_feature(0x02) failed 00:18:44.240 get_feature(0x04) failed 00:18:44.240 ===================================================== 00:18:44.240 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:44.240 ===================================================== 00:18:44.240 Controller Capabilities/Features 00:18:44.240 ================================ 00:18:44.240 Vendor ID: 0000 00:18:44.240 Subsystem Vendor ID: 0000 00:18:44.240 Serial Number: d86b37381a40b6ac30e9 00:18:44.240 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:44.240 Firmware Version: 6.7.0-68 00:18:44.240 Recommended Arb Burst: 6 00:18:44.240 IEEE OUI Identifier: 00 00 00 00:18:44.240 Multi-path I/O 00:18:44.240 May have multiple subsystem ports: Yes 00:18:44.240 May have multiple controllers: Yes 00:18:44.240 Associated with SR-IOV VF: No 00:18:44.240 Max Data Transfer Size: Unlimited 00:18:44.240 Max Number of Namespaces: 1024 00:18:44.240 Max Number of I/O Queues: 128 00:18:44.240 NVMe Specification Version (VS): 1.3 00:18:44.240 NVMe Specification Version (Identify): 1.3 00:18:44.240 Maximum Queue Entries: 1024 00:18:44.240 Contiguous Queues Required: No 00:18:44.240 Arbitration Mechanisms Supported 00:18:44.240 Weighted Round Robin: Not Supported 00:18:44.240 Vendor Specific: Not Supported 00:18:44.240 Reset Timeout: 7500 ms 00:18:44.240 Doorbell Stride: 4 bytes 00:18:44.240 NVM Subsystem Reset: Not Supported 00:18:44.240 Command Sets Supported 00:18:44.240 NVM Command Set: Supported 00:18:44.240 Boot Partition: Not Supported 00:18:44.240 Memory Page Size Minimum: 4096 bytes 00:18:44.240 Memory Page Size Maximum: 4096 bytes 00:18:44.240 Persistent Memory Region: Not Supported 
00:18:44.240 Optional Asynchronous Events Supported 00:18:44.240 Namespace Attribute Notices: Supported 00:18:44.240 Firmware Activation Notices: Not Supported 00:18:44.240 ANA Change Notices: Supported 00:18:44.240 PLE Aggregate Log Change Notices: Not Supported 00:18:44.240 LBA Status Info Alert Notices: Not Supported 00:18:44.240 EGE Aggregate Log Change Notices: Not Supported 00:18:44.240 Normal NVM Subsystem Shutdown event: Not Supported 00:18:44.240 Zone Descriptor Change Notices: Not Supported 00:18:44.240 Discovery Log Change Notices: Not Supported 00:18:44.240 Controller Attributes 00:18:44.240 128-bit Host Identifier: Supported 00:18:44.240 Non-Operational Permissive Mode: Not Supported 00:18:44.240 NVM Sets: Not Supported 00:18:44.240 Read Recovery Levels: Not Supported 00:18:44.240 Endurance Groups: Not Supported 00:18:44.240 Predictable Latency Mode: Not Supported 00:18:44.240 Traffic Based Keep ALive: Supported 00:18:44.240 Namespace Granularity: Not Supported 00:18:44.240 SQ Associations: Not Supported 00:18:44.240 UUID List: Not Supported 00:18:44.240 Multi-Domain Subsystem: Not Supported 00:18:44.240 Fixed Capacity Management: Not Supported 00:18:44.240 Variable Capacity Management: Not Supported 00:18:44.240 Delete Endurance Group: Not Supported 00:18:44.240 Delete NVM Set: Not Supported 00:18:44.240 Extended LBA Formats Supported: Not Supported 00:18:44.241 Flexible Data Placement Supported: Not Supported 00:18:44.241 00:18:44.241 Controller Memory Buffer Support 00:18:44.241 ================================ 00:18:44.241 Supported: No 00:18:44.241 00:18:44.241 Persistent Memory Region Support 00:18:44.241 ================================ 00:18:44.241 Supported: No 00:18:44.241 00:18:44.241 Admin Command Set Attributes 00:18:44.241 ============================ 00:18:44.241 Security Send/Receive: Not Supported 00:18:44.241 Format NVM: Not Supported 00:18:44.241 Firmware Activate/Download: Not Supported 00:18:44.241 Namespace Management: Not 
Supported 00:18:44.241 Device Self-Test: Not Supported 00:18:44.241 Directives: Not Supported 00:18:44.241 NVMe-MI: Not Supported 00:18:44.241 Virtualization Management: Not Supported 00:18:44.241 Doorbell Buffer Config: Not Supported 00:18:44.241 Get LBA Status Capability: Not Supported 00:18:44.241 Command & Feature Lockdown Capability: Not Supported 00:18:44.241 Abort Command Limit: 4 00:18:44.241 Async Event Request Limit: 4 00:18:44.241 Number of Firmware Slots: N/A 00:18:44.241 Firmware Slot 1 Read-Only: N/A 00:18:44.241 Firmware Activation Without Reset: N/A 00:18:44.241 Multiple Update Detection Support: N/A 00:18:44.241 Firmware Update Granularity: No Information Provided 00:18:44.241 Per-Namespace SMART Log: Yes 00:18:44.241 Asymmetric Namespace Access Log Page: Supported 00:18:44.241 ANA Transition Time : 10 sec 00:18:44.241 00:18:44.241 Asymmetric Namespace Access Capabilities 00:18:44.241 ANA Optimized State : Supported 00:18:44.241 ANA Non-Optimized State : Supported 00:18:44.241 ANA Inaccessible State : Supported 00:18:44.241 ANA Persistent Loss State : Supported 00:18:44.241 ANA Change State : Supported 00:18:44.241 ANAGRPID is not changed : No 00:18:44.241 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:44.241 00:18:44.241 ANA Group Identifier Maximum : 128 00:18:44.241 Number of ANA Group Identifiers : 128 00:18:44.241 Max Number of Allowed Namespaces : 1024 00:18:44.241 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:44.241 Command Effects Log Page: Supported 00:18:44.241 Get Log Page Extended Data: Supported 00:18:44.241 Telemetry Log Pages: Not Supported 00:18:44.241 Persistent Event Log Pages: Not Supported 00:18:44.241 Supported Log Pages Log Page: May Support 00:18:44.241 Commands Supported & Effects Log Page: Not Supported 00:18:44.241 Feature Identifiers & Effects Log Page:May Support 00:18:44.241 NVMe-MI Commands & Effects Log Page: May Support 00:18:44.241 Data Area 4 for Telemetry Log: Not Supported 00:18:44.241 Error Log Page 
Entries Supported: 128 00:18:44.241 Keep Alive: Supported 00:18:44.241 Keep Alive Granularity: 1000 ms 00:18:44.241 00:18:44.241 NVM Command Set Attributes 00:18:44.241 ========================== 00:18:44.241 Submission Queue Entry Size 00:18:44.241 Max: 64 00:18:44.241 Min: 64 00:18:44.241 Completion Queue Entry Size 00:18:44.241 Max: 16 00:18:44.241 Min: 16 00:18:44.241 Number of Namespaces: 1024 00:18:44.241 Compare Command: Not Supported 00:18:44.241 Write Uncorrectable Command: Not Supported 00:18:44.241 Dataset Management Command: Supported 00:18:44.241 Write Zeroes Command: Supported 00:18:44.241 Set Features Save Field: Not Supported 00:18:44.241 Reservations: Not Supported 00:18:44.241 Timestamp: Not Supported 00:18:44.241 Copy: Not Supported 00:18:44.241 Volatile Write Cache: Present 00:18:44.241 Atomic Write Unit (Normal): 1 00:18:44.241 Atomic Write Unit (PFail): 1 00:18:44.241 Atomic Compare & Write Unit: 1 00:18:44.241 Fused Compare & Write: Not Supported 00:18:44.241 Scatter-Gather List 00:18:44.241 SGL Command Set: Supported 00:18:44.241 SGL Keyed: Not Supported 00:18:44.241 SGL Bit Bucket Descriptor: Not Supported 00:18:44.241 SGL Metadata Pointer: Not Supported 00:18:44.241 Oversized SGL: Not Supported 00:18:44.241 SGL Metadata Address: Not Supported 00:18:44.241 SGL Offset: Supported 00:18:44.241 Transport SGL Data Block: Not Supported 00:18:44.241 Replay Protected Memory Block: Not Supported 00:18:44.241 00:18:44.241 Firmware Slot Information 00:18:44.241 ========================= 00:18:44.241 Active slot: 0 00:18:44.241 00:18:44.241 Asymmetric Namespace Access 00:18:44.241 =========================== 00:18:44.241 Change Count : 0 00:18:44.241 Number of ANA Group Descriptors : 1 00:18:44.241 ANA Group Descriptor : 0 00:18:44.241 ANA Group ID : 1 00:18:44.241 Number of NSID Values : 1 00:18:44.241 Change Count : 0 00:18:44.241 ANA State : 1 00:18:44.241 Namespace Identifier : 1 00:18:44.241 00:18:44.241 Commands Supported and Effects 00:18:44.241 
============================== 00:18:44.241 Admin Commands 00:18:44.241 -------------- 00:18:44.241 Get Log Page (02h): Supported 00:18:44.241 Identify (06h): Supported 00:18:44.241 Abort (08h): Supported 00:18:44.241 Set Features (09h): Supported 00:18:44.241 Get Features (0Ah): Supported 00:18:44.241 Asynchronous Event Request (0Ch): Supported 00:18:44.241 Keep Alive (18h): Supported 00:18:44.241 I/O Commands 00:18:44.241 ------------ 00:18:44.241 Flush (00h): Supported 00:18:44.241 Write (01h): Supported LBA-Change 00:18:44.241 Read (02h): Supported 00:18:44.241 Write Zeroes (08h): Supported LBA-Change 00:18:44.241 Dataset Management (09h): Supported 00:18:44.241 00:18:44.241 Error Log 00:18:44.241 ========= 00:18:44.241 Entry: 0 00:18:44.241 Error Count: 0x3 00:18:44.241 Submission Queue Id: 0x0 00:18:44.241 Command Id: 0x5 00:18:44.241 Phase Bit: 0 00:18:44.241 Status Code: 0x2 00:18:44.241 Status Code Type: 0x0 00:18:44.241 Do Not Retry: 1 00:18:44.241 Error Location: 0x28 00:18:44.241 LBA: 0x0 00:18:44.241 Namespace: 0x0 00:18:44.241 Vendor Log Page: 0x0 00:18:44.241 ----------- 00:18:44.241 Entry: 1 00:18:44.241 Error Count: 0x2 00:18:44.241 Submission Queue Id: 0x0 00:18:44.241 Command Id: 0x5 00:18:44.241 Phase Bit: 0 00:18:44.241 Status Code: 0x2 00:18:44.241 Status Code Type: 0x0 00:18:44.241 Do Not Retry: 1 00:18:44.241 Error Location: 0x28 00:18:44.241 LBA: 0x0 00:18:44.241 Namespace: 0x0 00:18:44.241 Vendor Log Page: 0x0 00:18:44.241 ----------- 00:18:44.241 Entry: 2 00:18:44.241 Error Count: 0x1 00:18:44.241 Submission Queue Id: 0x0 00:18:44.241 Command Id: 0x4 00:18:44.241 Phase Bit: 0 00:18:44.241 Status Code: 0x2 00:18:44.241 Status Code Type: 0x0 00:18:44.241 Do Not Retry: 1 00:18:44.241 Error Location: 0x28 00:18:44.241 LBA: 0x0 00:18:44.241 Namespace: 0x0 00:18:44.241 Vendor Log Page: 0x0 00:18:44.241 00:18:44.241 Number of Queues 00:18:44.241 ================ 00:18:44.241 Number of I/O Submission Queues: 128 00:18:44.241 Number of I/O 
Completion Queues: 128 00:18:44.241 00:18:44.241 ZNS Specific Controller Data 00:18:44.241 ============================ 00:18:44.241 Zone Append Size Limit: 0 00:18:44.241 00:18:44.241 00:18:44.241 Active Namespaces 00:18:44.241 ================= 00:18:44.241 get_feature(0x05) failed 00:18:44.241 Namespace ID:1 00:18:44.241 Command Set Identifier: NVM (00h) 00:18:44.241 Deallocate: Supported 00:18:44.241 Deallocated/Unwritten Error: Not Supported 00:18:44.241 Deallocated Read Value: Unknown 00:18:44.241 Deallocate in Write Zeroes: Not Supported 00:18:44.241 Deallocated Guard Field: 0xFFFF 00:18:44.241 Flush: Supported 00:18:44.241 Reservation: Not Supported 00:18:44.241 Namespace Sharing Capabilities: Multiple Controllers 00:18:44.241 Size (in LBAs): 1953525168 (931GiB) 00:18:44.241 Capacity (in LBAs): 1953525168 (931GiB) 00:18:44.241 Utilization (in LBAs): 1953525168 (931GiB) 00:18:44.241 UUID: c86a4957-1c59-432a-8176-ecc16428099c 00:18:44.241 Thin Provisioning: Not Supported 00:18:44.241 Per-NS Atomic Units: Yes 00:18:44.241 Atomic Boundary Size (Normal): 0 00:18:44.241 Atomic Boundary Size (PFail): 0 00:18:44.241 Atomic Boundary Offset: 0 00:18:44.241 NGUID/EUI64 Never Reused: No 00:18:44.241 ANA group ID: 1 00:18:44.241 Namespace Write Protected: No 00:18:44.241 Number of LBA Formats: 1 00:18:44.241 Current LBA Format: LBA Format #00 00:18:44.241 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:44.241 00:18:44.241 17:05:59 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:44.241 17:05:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:44.241 17:05:59 -- nvmf/common.sh@117 -- # sync 00:18:44.241 17:05:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.242 17:05:59 -- nvmf/common.sh@120 -- # set +e 00:18:44.242 17:05:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.242 17:05:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.242 rmmod nvme_tcp 00:18:44.242 rmmod nvme_fabrics 00:18:44.242 17:05:59 -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:18:44.242 17:05:59 -- nvmf/common.sh@124 -- # set -e 00:18:44.242 17:05:59 -- nvmf/common.sh@125 -- # return 0 00:18:44.242 17:05:59 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:44.242 17:05:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:44.242 17:05:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:44.242 17:05:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:44.242 17:05:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.242 17:05:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.242 17:05:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.242 17:05:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.242 17:05:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.776 17:06:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.776 17:06:01 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:46.776 17:06:01 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:46.776 17:06:01 -- nvmf/common.sh@675 -- # echo 0 00:18:46.776 17:06:01 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:46.776 17:06:01 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:46.776 17:06:01 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:46.776 17:06:01 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:46.776 17:06:01 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:18:46.776 17:06:01 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:18:46.776 17:06:02 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:47.342 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:18:47.601 0000:00:04.6 (8086 
0e26): ioatdma -> vfio-pci 00:18:47.601 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:18:47.601 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:18:47.601 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:18:47.601 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:18:47.601 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:18:47.601 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:18:47.601 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:18:47.601 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:18:47.601 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:18:47.601 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:18:47.601 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:18:47.601 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:18:47.601 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:18:47.601 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:18:48.538 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:18:48.538 00:18:48.538 real 0m8.830s 00:18:48.538 user 0m1.760s 00:18:48.538 sys 0m3.228s 00:18:48.538 17:06:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:48.538 17:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:48.538 ************************************ 00:18:48.538 END TEST nvmf_identify_kernel_target 00:18:48.538 ************************************ 00:18:48.538 17:06:04 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:48.538 17:06:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:48.538 17:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.538 17:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:48.796 ************************************ 00:18:48.796 START TEST nvmf_auth 00:18:48.796 ************************************ 00:18:48.796 17:06:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:48.796 * Looking for test storage... 
00:18:48.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:48.796 17:06:04 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.796 17:06:04 -- nvmf/common.sh@7 -- # uname -s 00:18:48.796 17:06:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.796 17:06:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.796 17:06:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.796 17:06:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.796 17:06:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.796 17:06:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.796 17:06:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.796 17:06:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.796 17:06:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.796 17:06:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.796 17:06:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.796 17:06:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.796 17:06:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.796 17:06:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.796 17:06:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.796 17:06:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.796 17:06:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.796 17:06:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.796 17:06:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.796 17:06:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.796 17:06:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.796 17:06:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.796 17:06:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.796 17:06:04 -- paths/export.sh@5 -- # export PATH 00:18:48.796 17:06:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.796 17:06:04 -- nvmf/common.sh@47 -- # : 0 00:18:48.796 17:06:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.796 17:06:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.796 17:06:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.796 17:06:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.796 17:06:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.796 17:06:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.796 17:06:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.796 17:06:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.796 17:06:04 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:48.796 17:06:04 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:48.796 17:06:04 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:48.796 17:06:04 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:48.796 17:06:04 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:48.796 17:06:04 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:48.796 17:06:04 -- host/auth.sh@21 -- # keys=() 00:18:48.796 17:06:04 -- host/auth.sh@77 -- # nvmftestinit 00:18:48.796 17:06:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:48.796 17:06:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:18:48.796 17:06:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:48.796 17:06:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:48.796 17:06:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:48.796 17:06:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.796 17:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.796 17:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.796 17:06:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:48.796 17:06:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:48.797 17:06:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.797 17:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:50.696 17:06:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:50.696 17:06:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.696 17:06:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.696 17:06:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.696 17:06:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.696 17:06:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.696 17:06:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.696 17:06:06 -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.696 17:06:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.696 17:06:06 -- nvmf/common.sh@296 -- # e810=() 00:18:50.696 17:06:06 -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.696 17:06:06 -- nvmf/common.sh@297 -- # x722=() 00:18:50.696 17:06:06 -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.696 17:06:06 -- nvmf/common.sh@298 -- # mlx=() 00:18:50.696 17:06:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.696 17:06:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:18:50.696 17:06:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.696 17:06:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.696 17:06:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:50.696 17:06:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.696 17:06:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.696 17:06:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:50.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.696 17:06:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.696 17:06:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.696 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.696 17:06:06 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.696 17:06:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.696 17:06:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.696 17:06:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:50.696 17:06:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.696 17:06:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:50.696 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.696 17:06:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.696 17:06:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.696 17:06:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.696 17:06:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:50.696 17:06:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.696 17:06:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.696 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.696 17:06:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.696 17:06:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:50.696 17:06:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:50.696 17:06:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:50.696 17:06:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
00:18:50.696 17:06:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.696 17:06:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.696 17:06:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.696 17:06:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:50.696 17:06:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.696 17:06:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.696 17:06:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:50.696 17:06:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.696 17:06:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.696 17:06:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:50.696 17:06:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:50.696 17:06:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.696 17:06:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.696 17:06:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.696 17:06:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.696 17:06:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:50.696 17:06:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.954 17:06:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.954 17:06:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.954 17:06:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:50.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:50.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:18:50.954 00:18:50.954 --- 10.0.0.2 ping statistics --- 00:18:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.954 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:18:50.954 17:06:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:18:50.954 00:18:50.954 --- 10.0.0.1 ping statistics --- 00:18:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.954 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:50.954 17:06:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.954 17:06:06 -- nvmf/common.sh@411 -- # return 0 00:18:50.954 17:06:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:50.954 17:06:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.954 17:06:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:50.954 17:06:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:50.954 17:06:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.954 17:06:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:50.954 17:06:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:50.954 17:06:06 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:18:50.954 17:06:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:50.954 17:06:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:50.954 17:06:06 -- common/autotest_common.sh@10 -- # set +x 00:18:50.954 17:06:06 -- nvmf/common.sh@470 -- # nvmfpid=1745767 00:18:50.954 17:06:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:50.955 17:06:06 -- nvmf/common.sh@471 -- # waitforlisten 1745767 00:18:50.955 17:06:06 -- 
common/autotest_common.sh@817 -- # '[' -z 1745767 ']' 00:18:50.955 17:06:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.955 17:06:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:50.955 17:06:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.955 17:06:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.955 17:06:06 -- common/autotest_common.sh@10 -- # set +x 00:18:51.212 17:06:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.212 17:06:06 -- common/autotest_common.sh@850 -- # return 0 00:18:51.212 17:06:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:51.212 17:06:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:51.212 17:06:06 -- common/autotest_common.sh@10 -- # set +x 00:18:51.212 17:06:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.212 17:06:06 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:51.212 17:06:06 -- host/auth.sh@81 -- # gen_key null 32 00:18:51.212 17:06:06 -- host/auth.sh@53 -- # local digest len file key 00:18:51.212 17:06:06 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.212 17:06:06 -- host/auth.sh@54 -- # local -A digests 00:18:51.212 17:06:06 -- host/auth.sh@56 -- # digest=null 00:18:51.212 17:06:06 -- host/auth.sh@56 -- # len=32 00:18:51.213 17:06:06 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:51.213 17:06:06 -- host/auth.sh@57 -- # key=b10a91452a351f2f6b0b10e0a099a379 00:18:51.213 17:06:06 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:18:51.213 17:06:06 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.4gI 00:18:51.213 17:06:06 -- host/auth.sh@59 -- # format_dhchap_key b10a91452a351f2f6b0b10e0a099a379 
0 00:18:51.213 17:06:06 -- nvmf/common.sh@708 -- # format_key DHHC-1 b10a91452a351f2f6b0b10e0a099a379 0 00:18:51.213 17:06:06 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:51.213 17:06:06 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:51.213 17:06:06 -- nvmf/common.sh@693 -- # key=b10a91452a351f2f6b0b10e0a099a379 00:18:51.213 17:06:06 -- nvmf/common.sh@693 -- # digest=0 00:18:51.213 17:06:06 -- nvmf/common.sh@694 -- # python - 00:18:51.213 17:06:06 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.4gI 00:18:51.213 17:06:06 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.4gI 00:18:51.213 17:06:06 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.4gI 00:18:51.213 17:06:06 -- host/auth.sh@82 -- # gen_key null 48 00:18:51.213 17:06:06 -- host/auth.sh@53 -- # local digest len file key 00:18:51.213 17:06:06 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.213 17:06:06 -- host/auth.sh@54 -- # local -A digests 00:18:51.213 17:06:06 -- host/auth.sh@56 -- # digest=null 00:18:51.213 17:06:06 -- host/auth.sh@56 -- # len=48 00:18:51.213 17:06:06 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:51.213 17:06:06 -- host/auth.sh@57 -- # key=74862518795861adc5fe279dab93a3a07c5d1b632b306aae 00:18:51.213 17:06:06 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:18:51.213 17:06:06 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.JtK 00:18:51.213 17:06:06 -- host/auth.sh@59 -- # format_dhchap_key 74862518795861adc5fe279dab93a3a07c5d1b632b306aae 0 00:18:51.213 17:06:06 -- nvmf/common.sh@708 -- # format_key DHHC-1 74862518795861adc5fe279dab93a3a07c5d1b632b306aae 0 00:18:51.213 17:06:06 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:51.213 17:06:06 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:51.213 17:06:06 -- nvmf/common.sh@693 -- # key=74862518795861adc5fe279dab93a3a07c5d1b632b306aae 00:18:51.213 17:06:06 -- nvmf/common.sh@693 -- # digest=0 00:18:51.213 17:06:06 -- nvmf/common.sh@694 -- # 
python - 00:18:51.471 17:06:06 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.JtK 00:18:51.471 17:06:06 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.JtK 00:18:51.471 17:06:06 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.JtK 00:18:51.471 17:06:06 -- host/auth.sh@83 -- # gen_key sha256 32 00:18:51.471 17:06:06 -- host/auth.sh@53 -- # local digest len file key 00:18:51.471 17:06:06 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.471 17:06:06 -- host/auth.sh@54 -- # local -A digests 00:18:51.471 17:06:06 -- host/auth.sh@56 -- # digest=sha256 00:18:51.471 17:06:06 -- host/auth.sh@56 -- # len=32 00:18:51.471 17:06:06 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:51.471 17:06:06 -- host/auth.sh@57 -- # key=3dc33e5acd19d71127f3618f2740a7d8 00:18:51.471 17:06:06 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:18:51.471 17:06:06 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.6eR 00:18:51.471 17:06:06 -- host/auth.sh@59 -- # format_dhchap_key 3dc33e5acd19d71127f3618f2740a7d8 1 00:18:51.471 17:06:06 -- nvmf/common.sh@708 -- # format_key DHHC-1 3dc33e5acd19d71127f3618f2740a7d8 1 00:18:51.471 17:06:06 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:51.471 17:06:06 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:51.471 17:06:06 -- nvmf/common.sh@693 -- # key=3dc33e5acd19d71127f3618f2740a7d8 00:18:51.471 17:06:06 -- nvmf/common.sh@693 -- # digest=1 00:18:51.471 17:06:06 -- nvmf/common.sh@694 -- # python - 00:18:51.471 17:06:07 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.6eR 00:18:51.471 17:06:07 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.6eR 00:18:51.471 17:06:07 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.6eR 00:18:51.471 17:06:07 -- host/auth.sh@84 -- # gen_key sha384 48 00:18:51.471 17:06:07 -- host/auth.sh@53 -- # local digest len file key 00:18:51.471 17:06:07 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:18:51.471 17:06:07 -- host/auth.sh@54 -- # local -A digests 00:18:51.471 17:06:07 -- host/auth.sh@56 -- # digest=sha384 00:18:51.471 17:06:07 -- host/auth.sh@56 -- # len=48 00:18:51.471 17:06:07 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:51.471 17:06:07 -- host/auth.sh@57 -- # key=8f177b43868c61295c6d716e2fee69d43ec44f0b54c4bcb3 00:18:51.471 17:06:07 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:18:51.471 17:06:07 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.Kwb 00:18:51.471 17:06:07 -- host/auth.sh@59 -- # format_dhchap_key 8f177b43868c61295c6d716e2fee69d43ec44f0b54c4bcb3 2 00:18:51.471 17:06:07 -- nvmf/common.sh@708 -- # format_key DHHC-1 8f177b43868c61295c6d716e2fee69d43ec44f0b54c4bcb3 2 00:18:51.471 17:06:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:51.471 17:06:07 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:51.471 17:06:07 -- nvmf/common.sh@693 -- # key=8f177b43868c61295c6d716e2fee69d43ec44f0b54c4bcb3 00:18:51.471 17:06:07 -- nvmf/common.sh@693 -- # digest=2 00:18:51.471 17:06:07 -- nvmf/common.sh@694 -- # python - 00:18:51.471 17:06:07 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.Kwb 00:18:51.471 17:06:07 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.Kwb 00:18:51.471 17:06:07 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.Kwb 00:18:51.471 17:06:07 -- host/auth.sh@85 -- # gen_key sha512 64 00:18:51.471 17:06:07 -- host/auth.sh@53 -- # local digest len file key 00:18:51.471 17:06:07 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.471 17:06:07 -- host/auth.sh@54 -- # local -A digests 00:18:51.471 17:06:07 -- host/auth.sh@56 -- # digest=sha512 00:18:51.471 17:06:07 -- host/auth.sh@56 -- # len=64 00:18:51.471 17:06:07 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:51.471 17:06:07 -- host/auth.sh@57 -- # key=73507f7b75324d8f8765c8896ab452ea7b9aea615252c6e307e6fdcf5f1eb0b3 00:18:51.471 17:06:07 -- 
host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:18:51.471 17:06:07 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.sT6 00:18:51.471 17:06:07 -- host/auth.sh@59 -- # format_dhchap_key 73507f7b75324d8f8765c8896ab452ea7b9aea615252c6e307e6fdcf5f1eb0b3 3 00:18:51.471 17:06:07 -- nvmf/common.sh@708 -- # format_key DHHC-1 73507f7b75324d8f8765c8896ab452ea7b9aea615252c6e307e6fdcf5f1eb0b3 3 00:18:51.471 17:06:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:51.471 17:06:07 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:51.471 17:06:07 -- nvmf/common.sh@693 -- # key=73507f7b75324d8f8765c8896ab452ea7b9aea615252c6e307e6fdcf5f1eb0b3 00:18:51.471 17:06:07 -- nvmf/common.sh@693 -- # digest=3 00:18:51.471 17:06:07 -- nvmf/common.sh@694 -- # python - 00:18:51.471 17:06:07 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.sT6 00:18:51.471 17:06:07 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.sT6 00:18:51.471 17:06:07 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.sT6 00:18:51.471 17:06:07 -- host/auth.sh@87 -- # waitforlisten 1745767 00:18:51.471 17:06:07 -- common/autotest_common.sh@817 -- # '[' -z 1745767 ']' 00:18:51.472 17:06:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.472 17:06:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:51.472 17:06:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
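The gen_key calls traced above reduce to: read len/2 random bytes, hex-encode them, and wrap the resulting ASCII string in a DHHC-1 container (base64 of the secret bytes with a trailing CRC-32; digest id 00 means "null"). A standalone sketch of that pipeline, using od instead of the log's xxd for portability; the little-endian CRC placement is an assumption based on SPDK's format_key helper:

```shell
#!/usr/bin/env bash
set -euo pipefail

# 16 random bytes -> 32 hex chars (the log uses `xxd -p -c0 -l 16`; od is
# used here so the sketch runs without xxd installed)
key=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
file=$(mktemp -t spdk.key-null.XXX)

# DHHC-1 container: base64( secret-bytes || crc32(secret) ), digest id 00 = null.
# Appending the CRC-32 little-endian is an assumption mirroring format_key.
secret=$(python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())
PY
)
echo "$secret" > "$file"
chmod 0600 "$file"
echo "$file"
```

This matches the shape seen in the log (e.g. the 32-char hex key becomes a 48-char base64 payload: 32 ASCII bytes plus 4 CRC bytes).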
00:18:51.472 17:06:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:51.472 17:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:51.729 17:06:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.729 17:06:07 -- common/autotest_common.sh@850 -- # return 0 00:18:51.729 17:06:07 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:51.729 17:06:07 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4gI 00:18:51.729 17:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.729 17:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:51.729 17:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.729 17:06:07 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:51.729 17:06:07 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JtK 00:18:51.729 17:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.729 17:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:51.729 17:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.729 17:06:07 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:51.729 17:06:07 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6eR 00:18:51.730 17:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.730 17:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:51.730 17:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.730 17:06:07 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:51.730 17:06:07 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Kwb 00:18:51.730 17:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.730 17:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:51.730 17:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.730 17:06:07 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:51.730 17:06:07 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 
/tmp/spdk.key-sha512.sT6 00:18:51.730 17:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.730 17:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:51.730 17:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.730 17:06:07 -- host/auth.sh@92 -- # nvmet_auth_init 00:18:51.730 17:06:07 -- host/auth.sh@35 -- # get_main_ns_ip 00:18:51.730 17:06:07 -- nvmf/common.sh@717 -- # local ip 00:18:51.730 17:06:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:51.730 17:06:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:51.730 17:06:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.730 17:06:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.730 17:06:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:51.730 17:06:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.730 17:06:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:51.730 17:06:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:51.730 17:06:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:51.730 17:06:07 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:51.730 17:06:07 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:51.730 17:06:07 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:51.730 17:06:07 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.730 17:06:07 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:51.730 17:06:07 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:51.730 17:06:07 -- nvmf/common.sh@628 -- # local block nvme 00:18:51.730 17:06:07 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]]
00:18:51.730 17:06:07 -- nvmf/common.sh@631 -- # modprobe nvmet
00:18:51.730 17:06:07 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]]
00:18:51.730 17:06:07 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:18:53.103 Waiting for block devices as requested
00:18:53.103 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:18:53.103 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:18:53.103 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:18:53.103 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:18:53.103 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:18:53.103 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:18:53.361 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:18:53.361 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:18:53.361 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:18:53.361 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:18:53.620 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:18:53.620 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:18:53.620 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:18:53.620 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:18:53.879 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:18:53.879 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:18:53.879 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:18:54.478 17:06:09 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme*
00:18:54.479 17:06:09 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]]
00:18:54.479 17:06:09 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1
00:18:54.479 17:06:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:18:54.479 17:06:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:18:54.479 17:06:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:18:54.479 17:06:09 -- nvmf/common.sh@642 -- # block_in_use nvme0n1
00:18:54.479 17:06:09 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:18:54.479 17:06:09 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:18:54.479 No valid GPT data, bailing
00:18:54.479 17:06:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:18:54.479 17:06:09 -- scripts/common.sh@391 -- # pt=
00:18:54.479 17:06:09 -- scripts/common.sh@392 -- # return 1
00:18:54.479 17:06:09 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1
00:18:54.479 17:06:09 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]]
00:18:54.479 17:06:09 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:18:54.479 17:06:09 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:18:54.479 17:06:09 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:18:54.479 17:06:09 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:18:54.479 17:06:09 -- nvmf/common.sh@656 -- # echo 1
00:18:54.479 17:06:09 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1
00:18:54.479 17:06:09 -- nvmf/common.sh@658 -- # echo 1
00:18:54.479 17:06:09 -- nvmf/common.sh@660 -- # echo 10.0.0.1
00:18:54.479 17:06:09 -- nvmf/common.sh@661 -- # echo tcp
00:18:54.479 17:06:10 -- nvmf/common.sh@662 -- # echo 4420
00:18:54.479 17:06:10 -- nvmf/common.sh@663 -- # echo ipv4
00:18:54.479 17:06:10 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:18:54.479 17:06:10 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:18:54.479
00:18:54.479 Discovery Log Number of Records 2, Generation counter 2
00:18:54.479 =====Discovery Log Entry 0======
00:18:54.479 trtype: tcp
00:18:54.479 adrfam: ipv4
00:18:54.479 subtype: current discovery subsystem
00:18:54.479 treq: not specified, sq flow control disable supported
00:18:54.479 portid: 1
00:18:54.479 trsvcid: 4420
00:18:54.479 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:18:54.479 traddr: 10.0.0.1
00:18:54.479 eflags: none
00:18:54.479 sectype: none
00:18:54.479 =====Discovery Log Entry 1======
00:18:54.479 trtype: tcp
00:18:54.479 adrfam: ipv4
00:18:54.479 subtype: nvme subsystem
00:18:54.479 treq: not specified, sq flow control disable supported
00:18:54.479 portid: 1
00:18:54.479 trsvcid: 4420
00:18:54.479 subnqn: nqn.2024-02.io.spdk:cnode0
00:18:54.479 traddr: 10.0.0.1
00:18:54.479 eflags: none
00:18:54.479 sectype: none
00:18:54.479 17:06:10 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:18:54.479 17:06:10 -- host/auth.sh@37 -- # echo 0
00:18:54.479 17:06:10 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:18:54.479 17:06:10 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:18:54.479 17:06:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:54.479 17:06:10 -- host/auth.sh@44 -- # digest=sha256
00:18:54.479 17:06:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:18:54.479 17:06:10 -- host/auth.sh@44 -- # keyid=1
00:18:54.479 17:06:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==:
00:18:54.479 17:06:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:54.479 17:06:10 -- host/auth.sh@48 -- # echo ffdhe2048
00:18:54.479 17:06:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==:
00:18:54.479 17:06:10 -- host/auth.sh@100 -- # IFS=,
00:18:54.479 17:06:10 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512
00:18:54.479 17:06:10 -- host/auth.sh@100 -- # IFS=,
00:18:54.479 17:06:10 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
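The configure_kernel_target sequence above (the mkdir/echo/ln -s calls under /sys/kernel/config/nvmet) assembles the kernel NVMe-oF target entirely out of configfs entries. The sketch below replays that choreography against a throwaway directory purely to make the layout visible; on a real host $nvmet would be /sys/kernel/config/nvmet, the steps need root plus the nvmet/nvmet-tcp modules, and the attribute files are created by the kernel rather than by the writes (the mapping of the log's bare `echo` lines to attribute names is inferred from the standard nvmet configfs layout):

```shell
#!/usr/bin/env bash
set -euo pipefail
nvmet=$(mktemp -d)   # stand-in for /sys/kernel/config/nvmet (dry run only)
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
ns=$subsys/namespaces/1
port=$nvmet/ports/1

mkdir -p "$subsys" "$ns" "$port/subsystems"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"
echo 1                               > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1                    > "$ns/device_path"   # back ns 1 with the block device
echo 1                               > "$ns/enable"
echo 10.0.0.1                        > "$port/addr_traddr" # TCP listener on 4420
echo tcp                             > "$port/addr_trtype"
echo 4420                            > "$port/addr_trsvcid"
echo ipv4                            > "$port/addr_adrfam"
# exposing the subsystem on the port is just a symlink
ln -s "$subsys" "$port/subsystems/"
```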
00:18:54.479 17:06:10 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:54.479 17:06:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.479 17:06:10 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:18:54.479 17:06:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.479 17:06:10 -- host/auth.sh@68 -- # keyid=1 00:18:54.479 17:06:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.479 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.479 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.479 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.479 17:06:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.479 17:06:10 -- nvmf/common.sh@717 -- # local ip 00:18:54.479 17:06:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.479 17:06:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.479 17:06:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.479 17:06:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.479 17:06:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.479 17:06:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.479 17:06:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.479 17:06:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.479 17:06:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.479 17:06:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:54.479 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.479 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.479 nvme0n1 00:18:54.479 
17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.479 17:06:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.479 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.479 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.479 17:06:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.479 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.739 17:06:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.739 17:06:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.739 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.739 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.739 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.739 17:06:10 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:18:54.739 17:06:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.739 17:06:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.739 17:06:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:54.739 17:06:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.739 17:06:10 -- host/auth.sh@44 -- # digest=sha256 00:18:54.739 17:06:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:54.739 17:06:10 -- host/auth.sh@44 -- # keyid=0 00:18:54.739 17:06:10 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:18:54.739 17:06:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.739 17:06:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:54.739 17:06:10 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:18:54.739 17:06:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:18:54.739 17:06:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.739 17:06:10 -- host/auth.sh@68 -- # digest=sha256 00:18:54.739 17:06:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 
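Both the earlier keyring_file_add_key loop and the host/auth.sh@109 loop here iterate with bash's `${!keys[@]}` index expansion, so each secret file lands under a predictable keyN name. A stubbed illustration (rpc_cmd is replaced by a hypothetical echo stub; the real helper sends the RPC to /var/tmp/spdk.sock):

```shell
#!/usr/bin/env bash
# Stub standing in for the real SPDK rpc_cmd wrapper
rpc_cmd() { printf '%s\n' "$*"; }

keys=(/tmp/spdk.key-null.4gI /tmp/spdk.key-null.JtK /tmp/spdk.key-sha256.6eR)
out=()
for i in "${!keys[@]}"; do              # expands to the indices 0 1 2
    out+=("$(rpc_cmd keyring_file_add_key "key$i" "${keys[i]}")")
done
printf '%s\n' "${out[@]}"
```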
00:18:54.739 17:06:10 -- host/auth.sh@68 -- # keyid=0 00:18:54.739 17:06:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.739 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.739 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.739 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.739 17:06:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.739 17:06:10 -- nvmf/common.sh@717 -- # local ip 00:18:54.739 17:06:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.739 17:06:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.739 17:06:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.739 17:06:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.739 17:06:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.739 17:06:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.739 17:06:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.739 17:06:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.739 17:06:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.739 17:06:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:54.739 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.739 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.739 nvme0n1 00:18:54.739 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.739 17:06:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.739 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.739 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.739 17:06:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.739 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.739 17:06:10 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.739 17:06:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.739 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.739 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.739 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.739 17:06:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.739 17:06:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:54.739 17:06:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.739 17:06:10 -- host/auth.sh@44 -- # digest=sha256 00:18:54.739 17:06:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:54.739 17:06:10 -- host/auth.sh@44 -- # keyid=1 00:18:54.739 17:06:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:18:54.739 17:06:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.739 17:06:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:54.739 17:06:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:18:54.739 17:06:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:18:54.739 17:06:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.739 17:06:10 -- host/auth.sh@68 -- # digest=sha256 00:18:54.739 17:06:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:54.739 17:06:10 -- host/auth.sh@68 -- # keyid=1 00:18:54.739 17:06:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.739 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.739 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.739 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.739 17:06:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.739 17:06:10 -- nvmf/common.sh@717 -- # local ip 00:18:54.739 17:06:10 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.739 17:06:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.739 17:06:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.739 17:06:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.739 17:06:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.739 17:06:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.739 17:06:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.739 17:06:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.739 17:06:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.739 17:06:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:54.739 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.739 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.999 nvme0n1 00:18:54.999 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.999 17:06:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.999 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.999 17:06:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.999 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.999 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.999 17:06:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.999 17:06:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.999 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.999 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.999 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.999 17:06:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.999 17:06:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:54.999 17:06:10 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:18:54.999 17:06:10 -- host/auth.sh@44 -- # digest=sha256 00:18:54.999 17:06:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:54.999 17:06:10 -- host/auth.sh@44 -- # keyid=2 00:18:54.999 17:06:10 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:18:54.999 17:06:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.999 17:06:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:54.999 17:06:10 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:18:54.999 17:06:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:18:54.999 17:06:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.999 17:06:10 -- host/auth.sh@68 -- # digest=sha256 00:18:54.999 17:06:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:54.999 17:06:10 -- host/auth.sh@68 -- # keyid=2 00:18:54.999 17:06:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.999 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.999 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.999 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.999 17:06:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.999 17:06:10 -- nvmf/common.sh@717 -- # local ip 00:18:54.999 17:06:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.999 17:06:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.999 17:06:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.999 17:06:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.999 17:06:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.999 17:06:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.999 17:06:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.999 17:06:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.999 17:06:10 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:18:54.999 17:06:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:54.999 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.999 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.257 nvme0n1 00:18:55.257 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.257 17:06:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.257 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.257 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.257 17:06:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.257 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.257 17:06:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.257 17:06:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.257 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.257 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.257 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.257 17:06:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.257 17:06:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:55.257 17:06:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.257 17:06:10 -- host/auth.sh@44 -- # digest=sha256 00:18:55.257 17:06:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:55.257 17:06:10 -- host/auth.sh@44 -- # keyid=3 00:18:55.257 17:06:10 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:18:55.257 17:06:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.257 17:06:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:55.257 17:06:10 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 
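The get_main_ns_ip block that recurs throughout this trace maps the transport to the *name* of the environment variable holding the address, then resolves it with bash indirect expansion. A minimal sketch (the three values below are assumptions mirroring this test's environment):

```shell
#!/usr/bin/env bash
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
var=${ip_candidates[$TEST_TRANSPORT]}   # selects the variable *name*
ip=${!var}                              # indirect expansion resolves its value
echo "$ip"
```

This is why the xtrace shows the variable name (NVMF_INITIATOR_IP) on one line and the concrete address (10.0.0.1) on the next.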
00:18:55.257 17:06:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:18:55.257 17:06:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.257 17:06:10 -- host/auth.sh@68 -- # digest=sha256 00:18:55.257 17:06:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:55.257 17:06:10 -- host/auth.sh@68 -- # keyid=3 00:18:55.257 17:06:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.257 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.257 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.257 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.257 17:06:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.257 17:06:10 -- nvmf/common.sh@717 -- # local ip 00:18:55.257 17:06:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.257 17:06:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.257 17:06:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.257 17:06:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.257 17:06:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:55.257 17:06:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.257 17:06:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:55.257 17:06:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:55.257 17:06:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:55.257 17:06:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:55.257 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.257 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.257 nvme0n1 00:18:55.257 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.257 17:06:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.257 17:06:10 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.257 17:06:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.257 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.257 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.516 17:06:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.516 17:06:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.516 17:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.516 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.516 17:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.516 17:06:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.516 17:06:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:55.516 17:06:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.516 17:06:10 -- host/auth.sh@44 -- # digest=sha256 00:18:55.516 17:06:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:55.516 17:06:10 -- host/auth.sh@44 -- # keyid=4 00:18:55.516 17:06:10 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:18:55.516 17:06:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.516 17:06:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:55.516 17:06:10 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:18:55.516 17:06:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:18:55.516 17:06:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.516 17:06:10 -- host/auth.sh@68 -- # digest=sha256 00:18:55.516 17:06:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:55.516 17:06:10 -- host/auth.sh@68 -- # keyid=4 00:18:55.516 17:06:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.516 17:06:10 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:18:55.516 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.516 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.516 17:06:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.516 17:06:11 -- nvmf/common.sh@717 -- # local ip 00:18:55.516 17:06:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.516 17:06:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.516 17:06:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.516 17:06:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.516 17:06:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:55.516 17:06:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.516 17:06:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:55.516 17:06:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:55.516 17:06:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:55.516 17:06:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:55.516 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.516 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.516 nvme0n1 00:18:55.516 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.516 17:06:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.516 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.516 17:06:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.516 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.516 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.516 17:06:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.516 17:06:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.516 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.516 17:06:11 -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.516 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.516 17:06:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.516 17:06:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.516 17:06:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:55.516 17:06:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.516 17:06:11 -- host/auth.sh@44 -- # digest=sha256 00:18:55.516 17:06:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:55.516 17:06:11 -- host/auth.sh@44 -- # keyid=0 00:18:55.516 17:06:11 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:18:55.516 17:06:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.516 17:06:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:55.516 17:06:11 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:18:55.516 17:06:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:18:55.516 17:06:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.516 17:06:11 -- host/auth.sh@68 -- # digest=sha256 00:18:55.516 17:06:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:55.516 17:06:11 -- host/auth.sh@68 -- # keyid=0 00:18:55.516 17:06:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.516 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.516 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.516 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.516 17:06:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.516 17:06:11 -- nvmf/common.sh@717 -- # local ip 00:18:55.516 17:06:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.516 17:06:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.516 17:06:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.516 
17:06:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.516 17:06:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:55.516 17:06:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.516 17:06:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:55.516 17:06:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:55.516 17:06:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:55.516 17:06:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:55.516 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.516 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.774 nvme0n1 00:18:55.774 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.774 17:06:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.774 17:06:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.774 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.774 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.774 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.774 17:06:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.774 17:06:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.774 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.774 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.774 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.774 17:06:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.774 17:06:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:55.774 17:06:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.774 17:06:11 -- host/auth.sh@44 -- # digest=sha256 00:18:55.774 17:06:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:55.774 17:06:11 -- host/auth.sh@44 -- # keyid=1 
00:18:55.774 17:06:11 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:18:55.774 17:06:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.774 17:06:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:55.774 17:06:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:18:55.774 17:06:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:18:55.774 17:06:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.774 17:06:11 -- host/auth.sh@68 -- # digest=sha256 00:18:55.774 17:06:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:55.774 17:06:11 -- host/auth.sh@68 -- # keyid=1 00:18:55.774 17:06:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.774 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.774 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.774 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.774 17:06:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.774 17:06:11 -- nvmf/common.sh@717 -- # local ip 00:18:55.774 17:06:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.774 17:06:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.774 17:06:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.774 17:06:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.774 17:06:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:55.774 17:06:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.774 17:06:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:55.774 17:06:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:55.774 17:06:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:55.774 17:06:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:55.774 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.774 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.033 nvme0n1 00:18:56.033 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.033 17:06:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.033 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.033 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.033 17:06:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:56.033 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.033 17:06:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.033 17:06:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.034 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.034 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.034 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.034 17:06:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:56.034 17:06:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:56.034 17:06:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:56.034 17:06:11 -- host/auth.sh@44 -- # digest=sha256 00:18:56.034 17:06:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:56.034 17:06:11 -- host/auth.sh@44 -- # keyid=2 00:18:56.034 17:06:11 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:18:56.034 17:06:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:56.034 17:06:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:56.034 17:06:11 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:18:56.034 17:06:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:18:56.034 17:06:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:56.034 17:06:11 -- 
host/auth.sh@68 -- # digest=sha256 00:18:56.034 17:06:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:56.034 17:06:11 -- host/auth.sh@68 -- # keyid=2 00:18:56.034 17:06:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.034 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.034 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.034 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.034 17:06:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:56.034 17:06:11 -- nvmf/common.sh@717 -- # local ip 00:18:56.034 17:06:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.034 17:06:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.034 17:06:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.034 17:06:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.034 17:06:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.034 17:06:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.034 17:06:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.034 17:06:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.034 17:06:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.034 17:06:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:56.034 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.034 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.293 nvme0n1 00:18:56.293 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.293 17:06:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.293 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.293 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.293 17:06:11 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:18:56.293 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.293 17:06:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.293 17:06:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.293 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.293 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.293 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.293 17:06:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:56.293 17:06:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:56.293 17:06:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:56.293 17:06:11 -- host/auth.sh@44 -- # digest=sha256 00:18:56.293 17:06:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:56.293 17:06:11 -- host/auth.sh@44 -- # keyid=3 00:18:56.293 17:06:11 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:18:56.293 17:06:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:56.293 17:06:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:56.293 17:06:11 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:18:56.293 17:06:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:18:56.293 17:06:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:56.293 17:06:11 -- host/auth.sh@68 -- # digest=sha256 00:18:56.293 17:06:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:56.293 17:06:11 -- host/auth.sh@68 -- # keyid=3 00:18:56.293 17:06:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.293 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.293 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.293 17:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.293 17:06:11 -- host/auth.sh@70 -- # get_main_ns_ip 
00:18:56.293 17:06:11 -- nvmf/common.sh@717 -- # local ip 00:18:56.293 17:06:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.293 17:06:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.293 17:06:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.293 17:06:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.293 17:06:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.293 17:06:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.293 17:06:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.293 17:06:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.293 17:06:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.293 17:06:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:56.293 17:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.293 17:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:56.550 nvme0n1 00:18:56.550 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.550 17:06:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.550 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.550 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.550 17:06:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:56.550 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.550 17:06:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.550 17:06:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.550 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.550 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.550 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.550 17:06:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:56.550 17:06:12 -- host/auth.sh@110 
-- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:56.550 17:06:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:56.550 17:06:12 -- host/auth.sh@44 -- # digest=sha256 00:18:56.550 17:06:12 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:56.550 17:06:12 -- host/auth.sh@44 -- # keyid=4 00:18:56.550 17:06:12 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:18:56.550 17:06:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:56.550 17:06:12 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:56.550 17:06:12 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:18:56.550 17:06:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:18:56.550 17:06:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:56.550 17:06:12 -- host/auth.sh@68 -- # digest=sha256 00:18:56.550 17:06:12 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:56.550 17:06:12 -- host/auth.sh@68 -- # keyid=4 00:18:56.550 17:06:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.551 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.551 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.551 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.551 17:06:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:56.551 17:06:12 -- nvmf/common.sh@717 -- # local ip 00:18:56.551 17:06:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.551 17:06:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.551 17:06:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.551 17:06:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.551 17:06:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.551 17:06:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 
00:18:56.551 17:06:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.551 17:06:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.551 17:06:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.551 17:06:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:56.551 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.551 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.551 nvme0n1 00:18:56.551 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.551 17:06:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.551 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.551 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.551 17:06:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:56.551 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.809 17:06:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.809 17:06:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.809 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.809 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.809 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.809 17:06:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.809 17:06:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:56.809 17:06:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:56.809 17:06:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:56.809 17:06:12 -- host/auth.sh@44 -- # digest=sha256 00:18:56.809 17:06:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:56.809 17:06:12 -- host/auth.sh@44 -- # keyid=0 00:18:56.809 17:06:12 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:18:56.809 17:06:12 -- 
host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:56.809 17:06:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:56.809 17:06:12 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:18:56.809 17:06:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:18:56.809 17:06:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:56.809 17:06:12 -- host/auth.sh@68 -- # digest=sha256 00:18:56.809 17:06:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:56.809 17:06:12 -- host/auth.sh@68 -- # keyid=0 00:18:56.809 17:06:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:56.809 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.809 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:56.809 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.809 17:06:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:56.809 17:06:12 -- nvmf/common.sh@717 -- # local ip 00:18:56.809 17:06:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.809 17:06:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.809 17:06:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.809 17:06:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.809 17:06:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.809 17:06:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.809 17:06:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.809 17:06:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.809 17:06:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.809 17:06:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:56.809 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.809 17:06:12 -- common/autotest_common.sh@10 
-- # set +x 00:18:57.067 nvme0n1 00:18:57.067 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.067 17:06:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.067 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.067 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.067 17:06:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:57.067 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.067 17:06:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.067 17:06:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.067 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.067 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.067 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.067 17:06:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:57.067 17:06:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:57.067 17:06:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:57.067 17:06:12 -- host/auth.sh@44 -- # digest=sha256 00:18:57.067 17:06:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:57.067 17:06:12 -- host/auth.sh@44 -- # keyid=1 00:18:57.067 17:06:12 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:18:57.067 17:06:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:57.067 17:06:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:57.067 17:06:12 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:18:57.067 17:06:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:18:57.067 17:06:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:57.067 17:06:12 -- host/auth.sh@68 -- # digest=sha256 00:18:57.067 17:06:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:57.067 17:06:12 -- host/auth.sh@68 -- # keyid=1 00:18:57.067 
17:06:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.067 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.067 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.067 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.067 17:06:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:57.067 17:06:12 -- nvmf/common.sh@717 -- # local ip 00:18:57.067 17:06:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:57.067 17:06:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:57.067 17:06:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.067 17:06:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.067 17:06:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:57.067 17:06:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.067 17:06:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:57.067 17:06:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:57.067 17:06:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:57.067 17:06:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:57.067 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.067 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.326 nvme0n1 00:18:57.326 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.326 17:06:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.326 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.326 17:06:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:57.326 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.326 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.326 17:06:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.326 17:06:12 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.326 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.326 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.326 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.326 17:06:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:57.326 17:06:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:57.326 17:06:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:57.326 17:06:12 -- host/auth.sh@44 -- # digest=sha256 00:18:57.326 17:06:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:57.326 17:06:12 -- host/auth.sh@44 -- # keyid=2 00:18:57.326 17:06:12 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:18:57.326 17:06:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:57.326 17:06:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:57.326 17:06:12 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:18:57.326 17:06:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:18:57.326 17:06:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:57.326 17:06:12 -- host/auth.sh@68 -- # digest=sha256 00:18:57.326 17:06:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:57.326 17:06:12 -- host/auth.sh@68 -- # keyid=2 00:18:57.326 17:06:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.326 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.326 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.326 17:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.326 17:06:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:57.326 17:06:12 -- nvmf/common.sh@717 -- # local ip 00:18:57.326 17:06:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:57.326 17:06:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:57.326 17:06:12 
-- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.326 17:06:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.326 17:06:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:57.326 17:06:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.326 17:06:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:57.326 17:06:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:57.326 17:06:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:57.326 17:06:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:57.326 17:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.326 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:18:57.585 nvme0n1 00:18:57.585 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.585 17:06:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.585 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.585 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.585 17:06:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:57.585 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.585 17:06:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.585 17:06:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.585 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.585 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.585 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.585 17:06:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:57.585 17:06:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:57.585 17:06:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:57.585 17:06:13 -- host/auth.sh@44 -- # digest=sha256 00:18:57.585 17:06:13 -- 
host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:57.585 17:06:13 -- host/auth.sh@44 -- # keyid=3 00:18:57.585 17:06:13 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:18:57.585 17:06:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:57.585 17:06:13 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:57.585 17:06:13 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:18:57.585 17:06:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:18:57.585 17:06:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:57.585 17:06:13 -- host/auth.sh@68 -- # digest=sha256 00:18:57.585 17:06:13 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:57.585 17:06:13 -- host/auth.sh@68 -- # keyid=3 00:18:57.585 17:06:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.585 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.585 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.585 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.585 17:06:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:57.585 17:06:13 -- nvmf/common.sh@717 -- # local ip 00:18:57.585 17:06:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:57.585 17:06:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:57.585 17:06:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.585 17:06:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.585 17:06:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:57.585 17:06:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.585 17:06:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:57.585 17:06:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:57.585 17:06:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:57.585 17:06:13 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:57.585 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.585 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.843 nvme0n1 00:18:57.843 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.843 17:06:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.843 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.843 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.843 17:06:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:57.843 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.102 17:06:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.102 17:06:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.102 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.102 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.102 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.102 17:06:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:58.102 17:06:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:58.102 17:06:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:58.102 17:06:13 -- host/auth.sh@44 -- # digest=sha256 00:18:58.102 17:06:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:58.102 17:06:13 -- host/auth.sh@44 -- # keyid=4 00:18:58.102 17:06:13 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:18:58.102 17:06:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:58.102 17:06:13 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:58.102 17:06:13 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:18:58.102 17:06:13 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:18:58.102 17:06:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:58.102 17:06:13 -- host/auth.sh@68 -- # digest=sha256 00:18:58.102 17:06:13 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:58.102 17:06:13 -- host/auth.sh@68 -- # keyid=4 00:18:58.102 17:06:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.102 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.102 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.102 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.102 17:06:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:58.102 17:06:13 -- nvmf/common.sh@717 -- # local ip 00:18:58.102 17:06:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:58.102 17:06:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:58.102 17:06:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.102 17:06:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.102 17:06:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:58.102 17:06:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.102 17:06:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:58.102 17:06:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:58.102 17:06:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:58.102 17:06:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:58.102 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.102 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.362 nvme0n1 00:18:58.362 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.362 17:06:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.362 17:06:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.362 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.362 17:06:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:58.362 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.362 17:06:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.362 17:06:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.362 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.362 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.362 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.362 17:06:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.362 17:06:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:58.362 17:06:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:58.362 17:06:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:58.362 17:06:13 -- host/auth.sh@44 -- # digest=sha256 00:18:58.362 17:06:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.362 17:06:13 -- host/auth.sh@44 -- # keyid=0 00:18:58.362 17:06:13 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:18:58.362 17:06:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:58.362 17:06:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:58.362 17:06:13 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:18:58.362 17:06:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:18:58.362 17:06:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:58.362 17:06:13 -- host/auth.sh@68 -- # digest=sha256 00:18:58.362 17:06:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:58.362 17:06:13 -- host/auth.sh@68 -- # keyid=0 00:18:58.362 17:06:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.362 17:06:13 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:58.362 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.362 17:06:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.362 17:06:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:58.362 17:06:13 -- nvmf/common.sh@717 -- # local ip 00:18:58.362 17:06:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:58.362 17:06:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:58.362 17:06:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.362 17:06:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.362 17:06:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:58.362 17:06:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.362 17:06:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:58.362 17:06:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:58.362 17:06:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:58.362 17:06:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:58.362 17:06:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.362 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.929 nvme0n1 00:18:58.929 17:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.929 17:06:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.929 17:06:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:58.929 17:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.929 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:58.929 17:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.929 17:06:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.929 17:06:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.929 17:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.929 17:06:14 -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.929 17:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.929 17:06:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:58.929 17:06:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:58.929 17:06:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:58.929 17:06:14 -- host/auth.sh@44 -- # digest=sha256 00:18:58.929 17:06:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.929 17:06:14 -- host/auth.sh@44 -- # keyid=1 00:18:58.929 17:06:14 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:18:58.929 17:06:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:58.929 17:06:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:58.930 17:06:14 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:18:58.930 17:06:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:18:58.930 17:06:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:58.930 17:06:14 -- host/auth.sh@68 -- # digest=sha256 00:18:58.930 17:06:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:58.930 17:06:14 -- host/auth.sh@68 -- # keyid=1 00:18:58.930 17:06:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.930 17:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.930 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:58.930 17:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.930 17:06:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:58.930 17:06:14 -- nvmf/common.sh@717 -- # local ip 00:18:58.930 17:06:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:58.930 17:06:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:58.930 17:06:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.930 17:06:14 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.930 17:06:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:58.930 17:06:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.930 17:06:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:58.930 17:06:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:58.930 17:06:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:58.930 17:06:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:58.930 17:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.930 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:59.211 nvme0n1 00:18:59.211 17:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.211 17:06:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.211 17:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.211 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:59.547 17:06:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:59.547 17:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.547 17:06:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.547 17:06:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.547 17:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.547 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:59.547 17:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.547 17:06:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:59.547 17:06:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:59.547 17:06:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:59.547 17:06:14 -- host/auth.sh@44 -- # digest=sha256 00:18:59.547 17:06:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:59.547 17:06:14 -- host/auth.sh@44 -- # keyid=2 00:18:59.547 17:06:14 -- 
host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:18:59.547 17:06:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:59.547 17:06:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:59.547 17:06:14 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:18:59.547 17:06:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:18:59.547 17:06:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:59.547 17:06:14 -- host/auth.sh@68 -- # digest=sha256 00:18:59.547 17:06:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:59.547 17:06:14 -- host/auth.sh@68 -- # keyid=2 00:18:59.547 17:06:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.547 17:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.547 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:59.547 17:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.547 17:06:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:59.547 17:06:14 -- nvmf/common.sh@717 -- # local ip 00:18:59.547 17:06:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:59.547 17:06:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:59.547 17:06:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.547 17:06:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.547 17:06:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:59.547 17:06:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.547 17:06:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:59.547 17:06:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:59.547 17:06:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:59.547 17:06:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:59.547 
17:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.547 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:59.806 nvme0n1 00:18:59.806 17:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.806 17:06:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.806 17:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.806 17:06:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.806 17:06:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:59.806 17:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.064 17:06:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.064 17:06:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.064 17:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.064 17:06:15 -- common/autotest_common.sh@10 -- # set +x 00:19:00.064 17:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.064 17:06:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:00.064 17:06:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:00.064 17:06:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:00.064 17:06:15 -- host/auth.sh@44 -- # digest=sha256 00:19:00.064 17:06:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:00.064 17:06:15 -- host/auth.sh@44 -- # keyid=3 00:19:00.064 17:06:15 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:00.064 17:06:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:00.064 17:06:15 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:00.064 17:06:15 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:00.064 17:06:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:19:00.064 17:06:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:00.064 17:06:15 -- host/auth.sh@68 -- # digest=sha256 00:19:00.064 
17:06:15 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:00.064 17:06:15 -- host/auth.sh@68 -- # keyid=3 00:19:00.064 17:06:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:00.064 17:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.064 17:06:15 -- common/autotest_common.sh@10 -- # set +x 00:19:00.064 17:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.064 17:06:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:00.064 17:06:15 -- nvmf/common.sh@717 -- # local ip 00:19:00.064 17:06:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:00.064 17:06:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:00.064 17:06:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.064 17:06:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.064 17:06:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:00.064 17:06:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.064 17:06:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:00.064 17:06:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:00.064 17:06:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:00.064 17:06:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:00.064 17:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.064 17:06:15 -- common/autotest_common.sh@10 -- # set +x 00:19:00.629 nvme0n1 00:19:00.629 17:06:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.629 17:06:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.629 17:06:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:00.629 17:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.629 17:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:00.629 17:06:16 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:19:00.629 17:06:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.629 17:06:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.629 17:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.629 17:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:00.629 17:06:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.629 17:06:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:00.629 17:06:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:00.629 17:06:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:00.629 17:06:16 -- host/auth.sh@44 -- # digest=sha256 00:19:00.629 17:06:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:00.629 17:06:16 -- host/auth.sh@44 -- # keyid=4 00:19:00.629 17:06:16 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:00.629 17:06:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:00.629 17:06:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:00.629 17:06:16 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:00.629 17:06:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:19:00.629 17:06:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:00.629 17:06:16 -- host/auth.sh@68 -- # digest=sha256 00:19:00.629 17:06:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:00.629 17:06:16 -- host/auth.sh@68 -- # keyid=4 00:19:00.629 17:06:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:00.629 17:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.629 17:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:00.629 17:06:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.629 17:06:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:00.629 
17:06:16 -- nvmf/common.sh@717 -- # local ip 00:19:00.629 17:06:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:00.629 17:06:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:00.629 17:06:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.629 17:06:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.629 17:06:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:00.629 17:06:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.629 17:06:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:00.629 17:06:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:00.629 17:06:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:00.629 17:06:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:00.629 17:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.629 17:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.194 nvme0n1 00:19:01.194 17:06:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.194 17:06:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.194 17:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.194 17:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.194 17:06:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:01.194 17:06:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.194 17:06:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.194 17:06:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.194 17:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.194 17:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.194 17:06:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.194 17:06:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.194 17:06:16 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:19:01.194 17:06:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:01.194 17:06:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:01.194 17:06:16 -- host/auth.sh@44 -- # digest=sha256 00:19:01.194 17:06:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:01.194 17:06:16 -- host/auth.sh@44 -- # keyid=0 00:19:01.194 17:06:16 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:01.194 17:06:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:01.194 17:06:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:01.194 17:06:16 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:01.194 17:06:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:19:01.194 17:06:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:01.194 17:06:16 -- host/auth.sh@68 -- # digest=sha256 00:19:01.194 17:06:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:01.194 17:06:16 -- host/auth.sh@68 -- # keyid=0 00:19:01.194 17:06:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.194 17:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.194 17:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.194 17:06:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.194 17:06:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:01.194 17:06:16 -- nvmf/common.sh@717 -- # local ip 00:19:01.194 17:06:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:01.194 17:06:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:01.194 17:06:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.194 17:06:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.194 17:06:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:01.194 17:06:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.194 17:06:16 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:01.194 17:06:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:01.194 17:06:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:01.194 17:06:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:01.194 17:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.194 17:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:02.132 nvme0n1 00:19:02.132 17:06:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.132 17:06:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.132 17:06:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:02.132 17:06:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.132 17:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:02.132 17:06:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.132 17:06:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.132 17:06:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.132 17:06:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.132 17:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:02.132 17:06:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.132 17:06:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:02.132 17:06:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:02.132 17:06:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:02.132 17:06:17 -- host/auth.sh@44 -- # digest=sha256 00:19:02.132 17:06:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:02.132 17:06:17 -- host/auth.sh@44 -- # keyid=1 00:19:02.132 17:06:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:02.132 17:06:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:02.132 17:06:17 -- host/auth.sh@48 
-- # echo ffdhe8192 00:19:02.132 17:06:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:02.132 17:06:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:19:02.132 17:06:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:02.132 17:06:17 -- host/auth.sh@68 -- # digest=sha256 00:19:02.132 17:06:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:02.132 17:06:17 -- host/auth.sh@68 -- # keyid=1 00:19:02.132 17:06:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.132 17:06:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.132 17:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:02.132 17:06:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.132 17:06:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:02.132 17:06:17 -- nvmf/common.sh@717 -- # local ip 00:19:02.132 17:06:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:02.132 17:06:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:02.132 17:06:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.132 17:06:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.132 17:06:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:02.132 17:06:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.132 17:06:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:02.132 17:06:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:02.132 17:06:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:02.132 17:06:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:02.132 17:06:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.132 17:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 nvme0n1 00:19:03.069 17:06:18 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.069 17:06:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.069 17:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.069 17:06:18 -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 17:06:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:03.069 17:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.069 17:06:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.069 17:06:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.069 17:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.069 17:06:18 -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 17:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.069 17:06:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:03.069 17:06:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:03.069 17:06:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:03.069 17:06:18 -- host/auth.sh@44 -- # digest=sha256 00:19:03.069 17:06:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:03.069 17:06:18 -- host/auth.sh@44 -- # keyid=2 00:19:03.069 17:06:18 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:03.069 17:06:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:03.069 17:06:18 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:03.069 17:06:18 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:03.069 17:06:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:19:03.069 17:06:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:03.069 17:06:18 -- host/auth.sh@68 -- # digest=sha256 00:19:03.069 17:06:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:03.069 17:06:18 -- host/auth.sh@68 -- # keyid=2 00:19:03.069 17:06:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:19:03.069 17:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.069 17:06:18 -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 17:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.069 17:06:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:03.069 17:06:18 -- nvmf/common.sh@717 -- # local ip 00:19:03.069 17:06:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:03.069 17:06:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:03.069 17:06:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.069 17:06:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.069 17:06:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:03.069 17:06:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.069 17:06:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:03.069 17:06:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:03.069 17:06:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:03.069 17:06:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:03.069 17:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.069 17:06:18 -- common/autotest_common.sh@10 -- # set +x 00:19:04.003 nvme0n1 00:19:04.003 17:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.003 17:06:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.003 17:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.003 17:06:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:04.003 17:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.003 17:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.003 17:06:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.003 17:06:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.003 17:06:19 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.003 17:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.003 17:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.003 17:06:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:04.003 17:06:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:04.003 17:06:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:04.003 17:06:19 -- host/auth.sh@44 -- # digest=sha256 00:19:04.003 17:06:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:04.003 17:06:19 -- host/auth.sh@44 -- # keyid=3 00:19:04.003 17:06:19 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:04.003 17:06:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:04.003 17:06:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:04.003 17:06:19 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:04.003 17:06:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:19:04.003 17:06:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:04.003 17:06:19 -- host/auth.sh@68 -- # digest=sha256 00:19:04.003 17:06:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:04.003 17:06:19 -- host/auth.sh@68 -- # keyid=3 00:19:04.003 17:06:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.003 17:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.003 17:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.003 17:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.003 17:06:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:04.003 17:06:19 -- nvmf/common.sh@717 -- # local ip 00:19:04.003 17:06:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:04.003 17:06:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:04.003 17:06:19 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.003 17:06:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.003 17:06:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:04.003 17:06:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.003 17:06:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:04.003 17:06:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:04.003 17:06:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:04.003 17:06:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:04.003 17:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.003 17:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.941 nvme0n1 00:19:04.941 17:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.941 17:06:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.941 17:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.941 17:06:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.941 17:06:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:04.941 17:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.941 17:06:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.941 17:06:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.941 17:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.941 17:06:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.941 17:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.941 17:06:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:04.941 17:06:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:04.941 17:06:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:04.941 17:06:20 -- host/auth.sh@44 -- # digest=sha256 00:19:04.941 17:06:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 
00:19:04.941 17:06:20 -- host/auth.sh@44 -- # keyid=4 00:19:04.941 17:06:20 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:04.941 17:06:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:04.941 17:06:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:04.941 17:06:20 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:04.941 17:06:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:19:04.941 17:06:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:04.941 17:06:20 -- host/auth.sh@68 -- # digest=sha256 00:19:04.941 17:06:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:04.941 17:06:20 -- host/auth.sh@68 -- # keyid=4 00:19:04.941 17:06:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.941 17:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.941 17:06:20 -- common/autotest_common.sh@10 -- # set +x 00:19:04.941 17:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.941 17:06:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:04.941 17:06:20 -- nvmf/common.sh@717 -- # local ip 00:19:04.941 17:06:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:04.941 17:06:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:04.941 17:06:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.941 17:06:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.941 17:06:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:04.941 17:06:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.941 17:06:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:04.941 17:06:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:04.941 17:06:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:04.941 17:06:20 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:04.941 17:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.941 17:06:20 -- common/autotest_common.sh@10 -- # set +x 00:19:05.880 nvme0n1 00:19:05.880 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.880 17:06:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.880 17:06:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:05.880 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.880 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:05.880 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.140 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.140 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.140 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:06.140 17:06:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.140 17:06:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.140 17:06:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:06.140 17:06:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.140 17:06:21 -- host/auth.sh@44 -- # digest=sha384 00:19:06.140 17:06:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:06.140 17:06:21 -- host/auth.sh@44 -- # keyid=0 00:19:06.140 17:06:21 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:06.140 17:06:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.140 17:06:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:06.140 17:06:21 -- host/auth.sh@49 -- # echo 
DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:06.140 17:06:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:19:06.140 17:06:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.140 17:06:21 -- host/auth.sh@68 -- # digest=sha384 00:19:06.140 17:06:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:06.140 17:06:21 -- host/auth.sh@68 -- # keyid=0 00:19:06.140 17:06:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.140 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.140 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.140 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.140 17:06:21 -- nvmf/common.sh@717 -- # local ip 00:19:06.140 17:06:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.140 17:06:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.140 17:06:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.140 17:06:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.140 17:06:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.140 17:06:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.140 17:06:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.140 17:06:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.140 17:06:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.140 17:06:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:06.140 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.140 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.140 nvme0n1 00:19:06.140 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:19:06.140 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.140 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.140 17:06:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.140 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.140 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.140 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.140 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.140 17:06:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:06.140 17:06:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.140 17:06:21 -- host/auth.sh@44 -- # digest=sha384 00:19:06.140 17:06:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:06.140 17:06:21 -- host/auth.sh@44 -- # keyid=1 00:19:06.140 17:06:21 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:06.140 17:06:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.140 17:06:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:06.140 17:06:21 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:06.140 17:06:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:19:06.140 17:06:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.140 17:06:21 -- host/auth.sh@68 -- # digest=sha384 00:19:06.140 17:06:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:06.140 17:06:21 -- host/auth.sh@68 -- # keyid=1 00:19:06.140 17:06:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.140 17:06:21 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.140 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.140 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.140 17:06:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.140 17:06:21 -- nvmf/common.sh@717 -- # local ip 00:19:06.140 17:06:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.140 17:06:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.140 17:06:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.140 17:06:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.140 17:06:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.140 17:06:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.140 17:06:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.140 17:06:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.140 17:06:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.140 17:06:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:06.140 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.140 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.400 nvme0n1 00:19:06.400 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.400 17:06:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.400 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.400 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.400 17:06:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.400 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.400 17:06:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.400 17:06:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.400 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:19:06.400 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.400 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.400 17:06:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.400 17:06:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:06.400 17:06:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.400 17:06:21 -- host/auth.sh@44 -- # digest=sha384 00:19:06.400 17:06:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:06.400 17:06:21 -- host/auth.sh@44 -- # keyid=2 00:19:06.400 17:06:21 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:06.400 17:06:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.400 17:06:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:06.400 17:06:21 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:06.400 17:06:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:19:06.400 17:06:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.400 17:06:21 -- host/auth.sh@68 -- # digest=sha384 00:19:06.400 17:06:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:06.400 17:06:21 -- host/auth.sh@68 -- # keyid=2 00:19:06.400 17:06:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.400 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.400 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.400 17:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.400 17:06:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.400 17:06:21 -- nvmf/common.sh@717 -- # local ip 00:19:06.400 17:06:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.400 17:06:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.400 17:06:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.400 17:06:21 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.400 17:06:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.400 17:06:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.400 17:06:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.400 17:06:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.400 17:06:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.400 17:06:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:06.400 17:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.400 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:06.400 nvme0n1 00:19:06.400 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.400 17:06:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.400 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.400 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.400 17:06:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.657 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.657 17:06:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.657 17:06:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.657 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.657 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.657 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.657 17:06:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.657 17:06:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:06.657 17:06:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.657 17:06:22 -- host/auth.sh@44 -- # digest=sha384 00:19:06.657 17:06:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:06.657 17:06:22 -- host/auth.sh@44 -- # keyid=3 00:19:06.658 17:06:22 -- 
host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:06.658 17:06:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.658 17:06:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:06.658 17:06:22 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:06.658 17:06:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:19:06.658 17:06:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.658 17:06:22 -- host/auth.sh@68 -- # digest=sha384 00:19:06.658 17:06:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:06.658 17:06:22 -- host/auth.sh@68 -- # keyid=3 00:19:06.658 17:06:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.658 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.658 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.658 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.658 17:06:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.658 17:06:22 -- nvmf/common.sh@717 -- # local ip 00:19:06.658 17:06:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.658 17:06:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.658 17:06:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.658 17:06:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.658 17:06:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.658 17:06:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.658 17:06:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.658 17:06:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.658 17:06:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.658 17:06:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:06.658 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.658 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.658 nvme0n1 00:19:06.658 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.658 17:06:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.658 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.658 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.658 17:06:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.658 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.658 17:06:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.658 17:06:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.658 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.658 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.658 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.658 17:06:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.658 17:06:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:06.658 17:06:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.658 17:06:22 -- host/auth.sh@44 -- # digest=sha384 00:19:06.658 17:06:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:06.658 17:06:22 -- host/auth.sh@44 -- # keyid=4 00:19:06.658 17:06:22 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:06.658 17:06:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.658 17:06:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:06.658 17:06:22 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:06.658 17:06:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:19:06.658 17:06:22 -- host/auth.sh@66 -- # local 
digest dhgroup keyid 00:19:06.658 17:06:22 -- host/auth.sh@68 -- # digest=sha384 00:19:06.658 17:06:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:06.658 17:06:22 -- host/auth.sh@68 -- # keyid=4 00:19:06.658 17:06:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.658 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.658 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.658 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.658 17:06:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.658 17:06:22 -- nvmf/common.sh@717 -- # local ip 00:19:06.658 17:06:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.658 17:06:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.658 17:06:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.658 17:06:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.658 17:06:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.658 17:06:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.658 17:06:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.658 17:06:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.658 17:06:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.658 17:06:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:06.658 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.658 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.917 nvme0n1 00:19:06.917 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.917 17:06:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.917 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.917 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.917 17:06:22 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.917 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.917 17:06:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.917 17:06:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.917 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.917 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.917 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.917 17:06:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.917 17:06:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.917 17:06:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:06.917 17:06:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.917 17:06:22 -- host/auth.sh@44 -- # digest=sha384 00:19:06.917 17:06:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.917 17:06:22 -- host/auth.sh@44 -- # keyid=0 00:19:06.917 17:06:22 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:06.917 17:06:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.917 17:06:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.917 17:06:22 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:06.917 17:06:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:19:06.917 17:06:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.917 17:06:22 -- host/auth.sh@68 -- # digest=sha384 00:19:06.917 17:06:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.917 17:06:22 -- host/auth.sh@68 -- # keyid=0 00:19:06.917 17:06:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.917 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.917 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:06.917 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:19:06.917 17:06:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.917 17:06:22 -- nvmf/common.sh@717 -- # local ip 00:19:06.917 17:06:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.917 17:06:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.917 17:06:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.917 17:06:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.917 17:06:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.917 17:06:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.917 17:06:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.917 17:06:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.917 17:06:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.917 17:06:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:06.917 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.917 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.176 nvme0n1 00:19:07.176 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.176 17:06:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.176 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.176 17:06:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.176 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.176 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.176 17:06:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.176 17:06:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.176 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.176 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.176 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.176 17:06:22 -- host/auth.sh@109 -- # 
for keyid in "${!keys[@]}" 00:19:07.176 17:06:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:07.176 17:06:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.176 17:06:22 -- host/auth.sh@44 -- # digest=sha384 00:19:07.176 17:06:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:07.176 17:06:22 -- host/auth.sh@44 -- # keyid=1 00:19:07.176 17:06:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:07.176 17:06:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.176 17:06:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:07.176 17:06:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:07.176 17:06:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:19:07.176 17:06:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.176 17:06:22 -- host/auth.sh@68 -- # digest=sha384 00:19:07.176 17:06:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:07.176 17:06:22 -- host/auth.sh@68 -- # keyid=1 00:19:07.176 17:06:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.176 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.176 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.176 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.176 17:06:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.176 17:06:22 -- nvmf/common.sh@717 -- # local ip 00:19:07.176 17:06:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.176 17:06:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.176 17:06:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.176 17:06:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.176 17:06:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.176 17:06:22 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:19:07.176 17:06:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.176 17:06:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.176 17:06:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.176 17:06:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:07.176 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.176 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.433 nvme0n1 00:19:07.433 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.433 17:06:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.433 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.433 17:06:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.433 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.433 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.433 17:06:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.433 17:06:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.433 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.433 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.433 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.433 17:06:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.433 17:06:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:07.433 17:06:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.433 17:06:22 -- host/auth.sh@44 -- # digest=sha384 00:19:07.433 17:06:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:07.433 17:06:22 -- host/auth.sh@44 -- # keyid=2 00:19:07.433 17:06:22 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:07.433 17:06:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.433 
17:06:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:07.433 17:06:22 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:07.433 17:06:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:19:07.433 17:06:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.433 17:06:22 -- host/auth.sh@68 -- # digest=sha384 00:19:07.433 17:06:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:07.433 17:06:22 -- host/auth.sh@68 -- # keyid=2 00:19:07.433 17:06:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.433 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.433 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.433 17:06:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.433 17:06:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.433 17:06:22 -- nvmf/common.sh@717 -- # local ip 00:19:07.433 17:06:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.433 17:06:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.434 17:06:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.434 17:06:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.434 17:06:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.434 17:06:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.434 17:06:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.434 17:06:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.434 17:06:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.434 17:06:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:07.434 17:06:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.434 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 nvme0n1 00:19:07.692 
17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.692 17:06:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.692 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.692 17:06:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.692 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.692 17:06:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.692 17:06:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.692 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.692 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.692 17:06:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.692 17:06:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:07.692 17:06:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.692 17:06:23 -- host/auth.sh@44 -- # digest=sha384 00:19:07.692 17:06:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:07.692 17:06:23 -- host/auth.sh@44 -- # keyid=3 00:19:07.692 17:06:23 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:07.692 17:06:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.692 17:06:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:07.692 17:06:23 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:07.692 17:06:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:19:07.692 17:06:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.692 17:06:23 -- host/auth.sh@68 -- # digest=sha384 00:19:07.692 17:06:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:07.692 17:06:23 -- host/auth.sh@68 -- # keyid=3 00:19:07.692 17:06:23 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.692 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.692 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.692 17:06:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.692 17:06:23 -- nvmf/common.sh@717 -- # local ip 00:19:07.692 17:06:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.692 17:06:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.692 17:06:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.692 17:06:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.692 17:06:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.692 17:06:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.692 17:06:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.692 17:06:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.692 17:06:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.692 17:06:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:07.692 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.692 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 nvme0n1 00:19:07.692 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.692 17:06:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.692 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.692 17:06:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.692 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.692 17:06:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.692 17:06:23 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:07.692 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.692 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.951 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.951 17:06:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.951 17:06:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:07.951 17:06:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.951 17:06:23 -- host/auth.sh@44 -- # digest=sha384 00:19:07.951 17:06:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:07.951 17:06:23 -- host/auth.sh@44 -- # keyid=4 00:19:07.951 17:06:23 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:07.951 17:06:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.951 17:06:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:07.951 17:06:23 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:07.951 17:06:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:19:07.951 17:06:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.951 17:06:23 -- host/auth.sh@68 -- # digest=sha384 00:19:07.951 17:06:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:07.951 17:06:23 -- host/auth.sh@68 -- # keyid=4 00:19:07.951 17:06:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.951 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.951 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.951 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.951 17:06:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.951 17:06:23 -- nvmf/common.sh@717 -- # local ip 00:19:07.951 17:06:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.951 17:06:23 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.951 17:06:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.951 17:06:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.951 17:06:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.951 17:06:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.951 17:06:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.951 17:06:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.951 17:06:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.951 17:06:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:07.951 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.951 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.951 nvme0n1 00:19:07.951 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.951 17:06:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.951 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.951 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.951 17:06:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.951 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.951 17:06:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.951 17:06:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.951 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.951 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.951 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.951 17:06:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.951 17:06:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.951 17:06:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:07.951 17:06:23 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.951 17:06:23 -- host/auth.sh@44 -- # digest=sha384 00:19:07.951 17:06:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.951 17:06:23 -- host/auth.sh@44 -- # keyid=0 00:19:07.951 17:06:23 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:07.951 17:06:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.951 17:06:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:07.951 17:06:23 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:07.951 17:06:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:19:07.951 17:06:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.951 17:06:23 -- host/auth.sh@68 -- # digest=sha384 00:19:07.951 17:06:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:07.951 17:06:23 -- host/auth.sh@68 -- # keyid=0 00:19:07.951 17:06:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.951 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.951 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.951 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.951 17:06:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.951 17:06:23 -- nvmf/common.sh@717 -- # local ip 00:19:07.951 17:06:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.951 17:06:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.951 17:06:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.951 17:06:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.951 17:06:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.951 17:06:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.951 17:06:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.951 17:06:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.951 17:06:23 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.951 17:06:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:07.951 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.951 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:08.209 nvme0n1 00:19:08.209 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.209 17:06:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.209 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.209 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:08.209 17:06:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.209 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.468 17:06:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.468 17:06:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.468 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.468 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:08.468 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.468 17:06:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.468 17:06:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:08.468 17:06:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.468 17:06:23 -- host/auth.sh@44 -- # digest=sha384 00:19:08.468 17:06:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.468 17:06:23 -- host/auth.sh@44 -- # keyid=1 00:19:08.468 17:06:23 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:08.468 17:06:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.468 17:06:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:08.468 17:06:23 -- host/auth.sh@49 -- # echo 
DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:08.468 17:06:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:19:08.468 17:06:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.468 17:06:23 -- host/auth.sh@68 -- # digest=sha384 00:19:08.468 17:06:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:08.468 17:06:23 -- host/auth.sh@68 -- # keyid=1 00:19:08.468 17:06:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.468 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.468 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:08.468 17:06:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.468 17:06:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.469 17:06:23 -- nvmf/common.sh@717 -- # local ip 00:19:08.469 17:06:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.469 17:06:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.469 17:06:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.469 17:06:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.469 17:06:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:08.469 17:06:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.469 17:06:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.469 17:06:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.469 17:06:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.469 17:06:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:08.469 17:06:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.469 17:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:08.727 nvme0n1 00:19:08.727 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.727 17:06:24 
-- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.727 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.727 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:08.727 17:06:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.727 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.727 17:06:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.727 17:06:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.727 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.727 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:08.727 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.727 17:06:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.727 17:06:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:08.727 17:06:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.727 17:06:24 -- host/auth.sh@44 -- # digest=sha384 00:19:08.727 17:06:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.727 17:06:24 -- host/auth.sh@44 -- # keyid=2 00:19:08.727 17:06:24 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:08.727 17:06:24 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.727 17:06:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:08.727 17:06:24 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:08.727 17:06:24 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:19:08.727 17:06:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.727 17:06:24 -- host/auth.sh@68 -- # digest=sha384 00:19:08.727 17:06:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:08.727 17:06:24 -- host/auth.sh@68 -- # keyid=2 00:19:08.727 17:06:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.727 17:06:24 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:08.727 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:08.727 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.727 17:06:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.727 17:06:24 -- nvmf/common.sh@717 -- # local ip 00:19:08.727 17:06:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.727 17:06:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.727 17:06:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.727 17:06:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.727 17:06:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:08.727 17:06:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.727 17:06:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.727 17:06:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.727 17:06:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.727 17:06:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:08.727 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.727 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:08.985 nvme0n1 00:19:08.985 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.985 17:06:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.985 17:06:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.985 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.985 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:08.985 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.985 17:06:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.985 17:06:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.985 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.985 17:06:24 -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.985 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.986 17:06:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.986 17:06:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:08.986 17:06:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.986 17:06:24 -- host/auth.sh@44 -- # digest=sha384 00:19:08.986 17:06:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.986 17:06:24 -- host/auth.sh@44 -- # keyid=3 00:19:08.986 17:06:24 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:08.986 17:06:24 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.986 17:06:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:08.986 17:06:24 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:08.986 17:06:24 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:19:08.986 17:06:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.986 17:06:24 -- host/auth.sh@68 -- # digest=sha384 00:19:08.986 17:06:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:08.986 17:06:24 -- host/auth.sh@68 -- # keyid=3 00:19:08.986 17:06:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.986 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.986 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:08.986 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.986 17:06:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.986 17:06:24 -- nvmf/common.sh@717 -- # local ip 00:19:08.986 17:06:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.986 17:06:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.986 17:06:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.986 17:06:24 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.986 17:06:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:08.986 17:06:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.986 17:06:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.986 17:06:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.986 17:06:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.986 17:06:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:08.986 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.986 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:09.246 nvme0n1 00:19:09.246 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.246 17:06:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.246 17:06:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:09.246 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.246 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:09.246 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.246 17:06:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.246 17:06:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.246 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.246 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:09.246 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.246 17:06:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:09.246 17:06:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:09.246 17:06:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:09.246 17:06:24 -- host/auth.sh@44 -- # digest=sha384 00:19:09.246 17:06:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:09.246 17:06:24 -- host/auth.sh@44 -- # keyid=4 00:19:09.246 17:06:24 -- 
host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:09.246 17:06:24 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:09.246 17:06:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:09.246 17:06:24 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:09.246 17:06:24 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:19:09.246 17:06:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:09.246 17:06:24 -- host/auth.sh@68 -- # digest=sha384 00:19:09.246 17:06:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:09.246 17:06:24 -- host/auth.sh@68 -- # keyid=4 00:19:09.246 17:06:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:09.246 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.246 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:09.246 17:06:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.246 17:06:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:09.246 17:06:24 -- nvmf/common.sh@717 -- # local ip 00:19:09.246 17:06:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:09.246 17:06:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:09.246 17:06:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.246 17:06:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.246 17:06:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:09.246 17:06:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.246 17:06:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:09.246 17:06:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:09.246 17:06:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:09.246 17:06:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:09.246 17:06:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.246 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:09.507 nvme0n1 00:19:09.507 17:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.507 17:06:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.507 17:06:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:09.507 17:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.507 17:06:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.507 17:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.507 17:06:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.507 17:06:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.507 17:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.507 17:06:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.507 17:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.507 17:06:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.507 17:06:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:09.507 17:06:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:09.507 17:06:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:09.507 17:06:25 -- host/auth.sh@44 -- # digest=sha384 00:19:09.507 17:06:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:09.507 17:06:25 -- host/auth.sh@44 -- # keyid=0 00:19:09.507 17:06:25 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:09.507 17:06:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:09.507 17:06:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:09.507 17:06:25 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:09.507 17:06:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:19:09.507 17:06:25 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:19:09.507 17:06:25 -- host/auth.sh@68 -- # digest=sha384 00:19:09.507 17:06:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:09.507 17:06:25 -- host/auth.sh@68 -- # keyid=0 00:19:09.507 17:06:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.507 17:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.507 17:06:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.507 17:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.507 17:06:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:09.507 17:06:25 -- nvmf/common.sh@717 -- # local ip 00:19:09.507 17:06:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:09.507 17:06:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:09.507 17:06:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.507 17:06:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.507 17:06:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:09.507 17:06:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.507 17:06:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:09.507 17:06:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:09.507 17:06:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:09.507 17:06:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:09.507 17:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.507 17:06:25 -- common/autotest_common.sh@10 -- # set +x 00:19:10.076 nvme0n1 00:19:10.076 17:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.076 17:06:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.076 17:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.076 17:06:25 -- common/autotest_common.sh@10 -- # set 
+x 00:19:10.076 17:06:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.076 17:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.076 17:06:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.076 17:06:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.076 17:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.076 17:06:25 -- common/autotest_common.sh@10 -- # set +x 00:19:10.076 17:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.076 17:06:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.076 17:06:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:10.076 17:06:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.076 17:06:25 -- host/auth.sh@44 -- # digest=sha384 00:19:10.076 17:06:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.076 17:06:25 -- host/auth.sh@44 -- # keyid=1 00:19:10.076 17:06:25 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:10.076 17:06:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:10.076 17:06:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:10.076 17:06:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:10.076 17:06:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:19:10.076 17:06:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.076 17:06:25 -- host/auth.sh@68 -- # digest=sha384 00:19:10.076 17:06:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:10.076 17:06:25 -- host/auth.sh@68 -- # keyid=1 00:19:10.076 17:06:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.076 17:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.076 17:06:25 -- common/autotest_common.sh@10 -- # set +x 00:19:10.076 17:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:19:10.076 17:06:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.076 17:06:25 -- nvmf/common.sh@717 -- # local ip 00:19:10.076 17:06:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.076 17:06:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.076 17:06:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.076 17:06:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.076 17:06:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.076 17:06:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.076 17:06:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.076 17:06:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.076 17:06:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.076 17:06:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:10.076 17:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.076 17:06:25 -- common/autotest_common.sh@10 -- # set +x 00:19:10.643 nvme0n1 00:19:10.643 17:06:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.643 17:06:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.643 17:06:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.643 17:06:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.643 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:19:10.643 17:06:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.643 17:06:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.643 17:06:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.643 17:06:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.643 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:19:10.643 17:06:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.643 17:06:26 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:19:10.643 17:06:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:10.643 17:06:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.643 17:06:26 -- host/auth.sh@44 -- # digest=sha384 00:19:10.643 17:06:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.643 17:06:26 -- host/auth.sh@44 -- # keyid=2 00:19:10.643 17:06:26 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:10.643 17:06:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:10.643 17:06:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:10.643 17:06:26 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:10.643 17:06:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:19:10.643 17:06:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.643 17:06:26 -- host/auth.sh@68 -- # digest=sha384 00:19:10.643 17:06:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:10.643 17:06:26 -- host/auth.sh@68 -- # keyid=2 00:19:10.643 17:06:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.643 17:06:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.643 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:19:10.643 17:06:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.643 17:06:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.643 17:06:26 -- nvmf/common.sh@717 -- # local ip 00:19:10.643 17:06:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.643 17:06:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.643 17:06:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.643 17:06:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.643 17:06:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.643 17:06:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.643 17:06:26 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.643 17:06:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.643 17:06:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.643 17:06:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:10.643 17:06:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.643 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:19:11.239 nvme0n1 00:19:11.239 17:06:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.239 17:06:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.239 17:06:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.239 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:19:11.239 17:06:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.239 17:06:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.239 17:06:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.239 17:06:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.239 17:06:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.239 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:19:11.239 17:06:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.239 17:06:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.239 17:06:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:11.239 17:06:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.239 17:06:26 -- host/auth.sh@44 -- # digest=sha384 00:19:11.239 17:06:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.239 17:06:26 -- host/auth.sh@44 -- # keyid=3 00:19:11.239 17:06:26 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:11.239 17:06:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:11.239 17:06:26 -- host/auth.sh@48 
-- # echo ffdhe6144 00:19:11.239 17:06:26 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:11.239 17:06:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:19:11.239 17:06:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.239 17:06:26 -- host/auth.sh@68 -- # digest=sha384 00:19:11.239 17:06:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:11.239 17:06:26 -- host/auth.sh@68 -- # keyid=3 00:19:11.239 17:06:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.239 17:06:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.239 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:19:11.239 17:06:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.239 17:06:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.239 17:06:26 -- nvmf/common.sh@717 -- # local ip 00:19:11.239 17:06:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.239 17:06:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.239 17:06:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.239 17:06:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.239 17:06:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.239 17:06:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.239 17:06:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.239 17:06:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.239 17:06:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.239 17:06:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:11.239 17:06:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.239 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:19:11.807 nvme0n1 00:19:11.807 17:06:27 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.807 17:06:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.807 17:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.807 17:06:27 -- common/autotest_common.sh@10 -- # set +x 00:19:11.807 17:06:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.807 17:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.807 17:06:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.807 17:06:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.807 17:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.807 17:06:27 -- common/autotest_common.sh@10 -- # set +x 00:19:11.807 17:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.807 17:06:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.807 17:06:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:11.807 17:06:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.807 17:06:27 -- host/auth.sh@44 -- # digest=sha384 00:19:11.807 17:06:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.807 17:06:27 -- host/auth.sh@44 -- # keyid=4 00:19:11.807 17:06:27 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:11.807 17:06:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:11.807 17:06:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:11.807 17:06:27 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:11.807 17:06:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:19:11.807 17:06:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.807 17:06:27 -- host/auth.sh@68 -- # digest=sha384 00:19:11.807 17:06:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:11.807 17:06:27 -- host/auth.sh@68 -- # keyid=4 00:19:11.807 17:06:27 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.807 17:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.807 17:06:27 -- common/autotest_common.sh@10 -- # set +x 00:19:11.807 17:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.807 17:06:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.807 17:06:27 -- nvmf/common.sh@717 -- # local ip 00:19:11.807 17:06:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.807 17:06:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.807 17:06:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.807 17:06:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.807 17:06:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.807 17:06:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.807 17:06:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.807 17:06:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.807 17:06:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.807 17:06:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:11.807 17:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.807 17:06:27 -- common/autotest_common.sh@10 -- # set +x 00:19:12.373 nvme0n1 00:19:12.373 17:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.373 17:06:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.373 17:06:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.373 17:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.373 17:06:27 -- common/autotest_common.sh@10 -- # set +x 00:19:12.373 17:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.373 17:06:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.373 17:06:28 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.373 17:06:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.373 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:19:12.373 17:06:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.373 17:06:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.373 17:06:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.374 17:06:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:12.374 17:06:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.374 17:06:28 -- host/auth.sh@44 -- # digest=sha384 00:19:12.374 17:06:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:12.374 17:06:28 -- host/auth.sh@44 -- # keyid=0 00:19:12.374 17:06:28 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:12.374 17:06:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:12.374 17:06:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:12.374 17:06:28 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:12.374 17:06:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:19:12.374 17:06:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.374 17:06:28 -- host/auth.sh@68 -- # digest=sha384 00:19:12.374 17:06:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:12.374 17:06:28 -- host/auth.sh@68 -- # keyid=0 00:19:12.374 17:06:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:12.374 17:06:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.374 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:19:12.374 17:06:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.374 17:06:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.374 17:06:28 -- nvmf/common.sh@717 -- # local ip 00:19:12.374 17:06:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.374 17:06:28 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.374 17:06:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.374 17:06:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.374 17:06:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.374 17:06:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.374 17:06:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.374 17:06:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.374 17:06:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.374 17:06:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:12.374 17:06:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.374 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 nvme0n1 00:19:13.334 17:06:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.334 17:06:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.334 17:06:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.334 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 17:06:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.334 17:06:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.334 17:06:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.334 17:06:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.334 17:06:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.334 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 17:06:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.334 17:06:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.334 17:06:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:13.334 17:06:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.334 17:06:28 -- 
host/auth.sh@44 -- # digest=sha384 00:19:13.334 17:06:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:13.335 17:06:28 -- host/auth.sh@44 -- # keyid=1 00:19:13.335 17:06:28 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:13.335 17:06:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:13.335 17:06:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:13.335 17:06:28 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:13.335 17:06:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:19:13.335 17:06:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.335 17:06:28 -- host/auth.sh@68 -- # digest=sha384 00:19:13.335 17:06:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:13.335 17:06:28 -- host/auth.sh@68 -- # keyid=1 00:19:13.335 17:06:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.335 17:06:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.335 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.335 17:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.335 17:06:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.335 17:06:29 -- nvmf/common.sh@717 -- # local ip 00:19:13.335 17:06:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.335 17:06:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.335 17:06:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.335 17:06:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.335 17:06:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.335 17:06:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.335 17:06:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.335 17:06:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.335 17:06:29 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:19:13.335 17:06:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:13.335 17:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.335 17:06:29 -- common/autotest_common.sh@10 -- # set +x 00:19:14.267 nvme0n1 00:19:14.267 17:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.267 17:06:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.267 17:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.267 17:06:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:14.267 17:06:29 -- common/autotest_common.sh@10 -- # set +x 00:19:14.267 17:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.267 17:06:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.267 17:06:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.267 17:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.267 17:06:29 -- common/autotest_common.sh@10 -- # set +x 00:19:14.526 17:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.526 17:06:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.526 17:06:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:14.526 17:06:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.526 17:06:29 -- host/auth.sh@44 -- # digest=sha384 00:19:14.526 17:06:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:14.526 17:06:29 -- host/auth.sh@44 -- # keyid=2 00:19:14.526 17:06:29 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:14.526 17:06:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:14.526 17:06:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:14.526 17:06:29 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:14.526 17:06:29 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe8192 2 00:19:14.526 17:06:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.526 17:06:29 -- host/auth.sh@68 -- # digest=sha384 00:19:14.526 17:06:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:14.526 17:06:29 -- host/auth.sh@68 -- # keyid=2 00:19:14.526 17:06:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.526 17:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.526 17:06:29 -- common/autotest_common.sh@10 -- # set +x 00:19:14.526 17:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.526 17:06:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.526 17:06:29 -- nvmf/common.sh@717 -- # local ip 00:19:14.526 17:06:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.526 17:06:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.526 17:06:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.526 17:06:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.526 17:06:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:14.526 17:06:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.526 17:06:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:14.526 17:06:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:14.526 17:06:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:14.526 17:06:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:14.526 17:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.526 17:06:29 -- common/autotest_common.sh@10 -- # set +x 00:19:15.464 nvme0n1 00:19:15.464 17:06:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.464 17:06:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.464 17:06:30 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:15.464 17:06:30 -- common/autotest_common.sh@10 -- # set +x 00:19:15.464 17:06:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.464 17:06:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.464 17:06:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.464 17:06:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.464 17:06:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.464 17:06:30 -- common/autotest_common.sh@10 -- # set +x 00:19:15.464 17:06:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.464 17:06:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.464 17:06:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:15.464 17:06:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.464 17:06:30 -- host/auth.sh@44 -- # digest=sha384 00:19:15.464 17:06:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:15.464 17:06:30 -- host/auth.sh@44 -- # keyid=3 00:19:15.464 17:06:30 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:15.464 17:06:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:15.464 17:06:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:15.464 17:06:30 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:15.464 17:06:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:19:15.465 17:06:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.465 17:06:30 -- host/auth.sh@68 -- # digest=sha384 00:19:15.465 17:06:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:15.465 17:06:30 -- host/auth.sh@68 -- # keyid=3 00:19:15.465 17:06:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.465 17:06:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.465 17:06:30 -- common/autotest_common.sh@10 -- 
# set +x 00:19:15.465 17:06:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.465 17:06:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.465 17:06:30 -- nvmf/common.sh@717 -- # local ip 00:19:15.465 17:06:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.465 17:06:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.465 17:06:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.465 17:06:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.465 17:06:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:15.465 17:06:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.465 17:06:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:15.465 17:06:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:15.465 17:06:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:15.465 17:06:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:15.465 17:06:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.465 17:06:30 -- common/autotest_common.sh@10 -- # set +x 00:19:16.403 nvme0n1 00:19:16.403 17:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.403 17:06:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.403 17:06:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.403 17:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.403 17:06:31 -- common/autotest_common.sh@10 -- # set +x 00:19:16.403 17:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.403 17:06:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.403 17:06:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.403 17:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.403 17:06:31 -- common/autotest_common.sh@10 -- # set +x 00:19:16.404 17:06:31 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.404 17:06:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.404 17:06:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:16.404 17:06:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.404 17:06:31 -- host/auth.sh@44 -- # digest=sha384 00:19:16.404 17:06:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:16.404 17:06:31 -- host/auth.sh@44 -- # keyid=4 00:19:16.404 17:06:31 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:16.404 17:06:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:16.404 17:06:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:16.404 17:06:31 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:16.404 17:06:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:19:16.404 17:06:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.404 17:06:31 -- host/auth.sh@68 -- # digest=sha384 00:19:16.404 17:06:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:16.404 17:06:31 -- host/auth.sh@68 -- # keyid=4 00:19:16.404 17:06:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.404 17:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.404 17:06:31 -- common/autotest_common.sh@10 -- # set +x 00:19:16.404 17:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.404 17:06:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.404 17:06:31 -- nvmf/common.sh@717 -- # local ip 00:19:16.404 17:06:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.404 17:06:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.404 17:06:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.404 17:06:31 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.404 17:06:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.404 17:06:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.404 17:06:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.404 17:06:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.404 17:06:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.404 17:06:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.404 17:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.404 17:06:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.341 nvme0n1 00:19:17.341 17:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.341 17:06:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.341 17:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.341 17:06:32 -- common/autotest_common.sh@10 -- # set +x 00:19:17.341 17:06:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.341 17:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.341 17:06:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.341 17:06:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.341 17:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.341 17:06:32 -- common/autotest_common.sh@10 -- # set +x 00:19:17.341 17:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.341 17:06:32 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:17.341 17:06:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.341 17:06:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.341 17:06:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:17.341 17:06:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.341 17:06:32 -- host/auth.sh@44 -- # digest=sha512 
00:19:17.341 17:06:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.341 17:06:32 -- host/auth.sh@44 -- # keyid=0 00:19:17.341 17:06:32 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:17.341 17:06:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.341 17:06:32 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:17.341 17:06:32 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:17.341 17:06:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:19:17.341 17:06:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.341 17:06:32 -- host/auth.sh@68 -- # digest=sha512 00:19:17.341 17:06:32 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:17.341 17:06:32 -- host/auth.sh@68 -- # keyid=0 00:19:17.341 17:06:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.341 17:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.341 17:06:32 -- common/autotest_common.sh@10 -- # set +x 00:19:17.341 17:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.341 17:06:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.341 17:06:32 -- nvmf/common.sh@717 -- # local ip 00:19:17.341 17:06:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.341 17:06:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.341 17:06:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.341 17:06:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.341 17:06:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.341 17:06:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.341 17:06:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.341 17:06:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.341 17:06:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.341 17:06:32 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:17.341 17:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.341 17:06:32 -- common/autotest_common.sh@10 -- # set +x 00:19:17.341 nvme0n1 00:19:17.341 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.341 17:06:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.341 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.341 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.341 17:06:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.341 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.600 17:06:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.601 17:06:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.601 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.601 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.601 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.601 17:06:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.601 17:06:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:17.601 17:06:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.601 17:06:33 -- host/auth.sh@44 -- # digest=sha512 00:19:17.601 17:06:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.601 17:06:33 -- host/auth.sh@44 -- # keyid=1 00:19:17.601 17:06:33 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:17.601 17:06:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.601 17:06:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:17.601 17:06:33 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:17.601 17:06:33 -- host/auth.sh@111 -- # connect_authenticate sha512 
ffdhe2048 1 00:19:17.601 17:06:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.601 17:06:33 -- host/auth.sh@68 -- # digest=sha512 00:19:17.601 17:06:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:17.601 17:06:33 -- host/auth.sh@68 -- # keyid=1 00:19:17.601 17:06:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.601 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.601 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.601 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.601 17:06:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.601 17:06:33 -- nvmf/common.sh@717 -- # local ip 00:19:17.601 17:06:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.601 17:06:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.601 17:06:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.601 17:06:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.601 17:06:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.601 17:06:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.601 17:06:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.601 17:06:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.601 17:06:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.601 17:06:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:17.601 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.601 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.601 nvme0n1 00:19:17.601 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.601 17:06:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.601 17:06:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.601 17:06:33 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.601 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.601 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.601 17:06:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.601 17:06:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.601 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.601 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.601 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.601 17:06:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.601 17:06:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:17.601 17:06:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.601 17:06:33 -- host/auth.sh@44 -- # digest=sha512 00:19:17.601 17:06:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.601 17:06:33 -- host/auth.sh@44 -- # keyid=2 00:19:17.601 17:06:33 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:17.601 17:06:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.601 17:06:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:17.601 17:06:33 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:17.601 17:06:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:19:17.601 17:06:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.601 17:06:33 -- host/auth.sh@68 -- # digest=sha512 00:19:17.601 17:06:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:17.601 17:06:33 -- host/auth.sh@68 -- # keyid=2 00:19:17.601 17:06:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.601 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.601 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.601 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:19:17.601 17:06:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.601 17:06:33 -- nvmf/common.sh@717 -- # local ip 00:19:17.601 17:06:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.601 17:06:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.601 17:06:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.601 17:06:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.601 17:06:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.601 17:06:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.601 17:06:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.601 17:06:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.601 17:06:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.601 17:06:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:17.601 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.601 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.859 nvme0n1 00:19:17.859 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.859 17:06:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.860 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.860 17:06:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.860 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.860 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.860 17:06:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.860 17:06:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.860 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.860 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.860 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.860 17:06:33 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:19:17.860 17:06:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:17.860 17:06:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.860 17:06:33 -- host/auth.sh@44 -- # digest=sha512 00:19:17.860 17:06:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.860 17:06:33 -- host/auth.sh@44 -- # keyid=3 00:19:17.860 17:06:33 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:17.860 17:06:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.860 17:06:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:17.860 17:06:33 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:17.860 17:06:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:19:17.860 17:06:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.860 17:06:33 -- host/auth.sh@68 -- # digest=sha512 00:19:17.860 17:06:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:17.860 17:06:33 -- host/auth.sh@68 -- # keyid=3 00:19:17.860 17:06:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.860 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.860 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:17.860 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.860 17:06:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.860 17:06:33 -- nvmf/common.sh@717 -- # local ip 00:19:17.860 17:06:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.860 17:06:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.860 17:06:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.860 17:06:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.860 17:06:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.860 17:06:33 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:19:17.860 17:06:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.860 17:06:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.860 17:06:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.860 17:06:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:17.860 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.860 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.120 nvme0n1 00:19:18.120 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.120 17:06:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.120 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.120 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.120 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.120 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.120 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.120 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.120 17:06:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:18.120 17:06:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.120 17:06:33 -- host/auth.sh@44 -- # digest=sha512 00:19:18.120 17:06:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:18.120 17:06:33 -- host/auth.sh@44 -- # keyid=4 00:19:18.120 17:06:33 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:18.120 17:06:33 -- host/auth.sh@47 
-- # echo 'hmac(sha512)' 00:19:18.120 17:06:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:18.120 17:06:33 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:18.120 17:06:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:19:18.120 17:06:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.120 17:06:33 -- host/auth.sh@68 -- # digest=sha512 00:19:18.120 17:06:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:18.120 17:06:33 -- host/auth.sh@68 -- # keyid=4 00:19:18.120 17:06:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.120 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.120 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.120 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.120 17:06:33 -- nvmf/common.sh@717 -- # local ip 00:19:18.120 17:06:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.120 17:06:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.120 17:06:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.120 17:06:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.120 17:06:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.120 17:06:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.120 17:06:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.120 17:06:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.120 17:06:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.120 17:06:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:18.120 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.120 17:06:33 -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.120 nvme0n1 00:19:18.120 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.120 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.120 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.120 17:06:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.120 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.120 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.120 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.120 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.120 17:06:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.120 17:06:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.120 17:06:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:18.120 17:06:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.120 17:06:33 -- host/auth.sh@44 -- # digest=sha512 00:19:18.120 17:06:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.120 17:06:33 -- host/auth.sh@44 -- # keyid=0 00:19:18.120 17:06:33 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:18.120 17:06:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.120 17:06:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:18.120 17:06:33 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:18.120 17:06:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:19:18.120 17:06:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.120 17:06:33 -- host/auth.sh@68 -- # digest=sha512 00:19:18.120 17:06:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 
00:19:18.120 17:06:33 -- host/auth.sh@68 -- # keyid=0 00:19:18.121 17:06:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.121 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.121 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.121 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.381 17:06:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.381 17:06:33 -- nvmf/common.sh@717 -- # local ip 00:19:18.381 17:06:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.381 17:06:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.381 17:06:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.381 17:06:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.381 17:06:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.381 17:06:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.381 17:06:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.381 17:06:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.381 17:06:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.381 17:06:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:18.381 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.381 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.381 nvme0n1 00:19:18.381 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.381 17:06:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.381 17:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.381 17:06:33 -- common/autotest_common.sh@10 -- # set +x 00:19:18.381 17:06:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.381 17:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.381 17:06:34 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.381 17:06:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.381 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.381 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.381 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.381 17:06:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.381 17:06:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:18.381 17:06:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.381 17:06:34 -- host/auth.sh@44 -- # digest=sha512 00:19:18.381 17:06:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.381 17:06:34 -- host/auth.sh@44 -- # keyid=1 00:19:18.381 17:06:34 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:18.381 17:06:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.381 17:06:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:18.381 17:06:34 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:18.381 17:06:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:19:18.381 17:06:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.381 17:06:34 -- host/auth.sh@68 -- # digest=sha512 00:19:18.381 17:06:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:18.381 17:06:34 -- host/auth.sh@68 -- # keyid=1 00:19:18.381 17:06:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.381 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.381 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.381 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.381 17:06:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.381 17:06:34 -- nvmf/common.sh@717 -- # local ip 00:19:18.381 17:06:34 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.381 17:06:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.381 17:06:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.381 17:06:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.381 17:06:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.381 17:06:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.381 17:06:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.381 17:06:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.381 17:06:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.381 17:06:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:18.381 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.381 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.641 nvme0n1 00:19:18.641 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.641 17:06:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.641 17:06:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.641 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.641 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.641 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.641 17:06:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.641 17:06:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.641 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.641 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.641 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.641 17:06:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.641 17:06:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:18.641 17:06:34 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:19:18.641 17:06:34 -- host/auth.sh@44 -- # digest=sha512 00:19:18.641 17:06:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.641 17:06:34 -- host/auth.sh@44 -- # keyid=2 00:19:18.641 17:06:34 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:18.641 17:06:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.641 17:06:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:18.641 17:06:34 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:18.641 17:06:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:19:18.641 17:06:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.641 17:06:34 -- host/auth.sh@68 -- # digest=sha512 00:19:18.641 17:06:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:18.641 17:06:34 -- host/auth.sh@68 -- # keyid=2 00:19:18.641 17:06:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.641 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.641 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.641 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.641 17:06:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.641 17:06:34 -- nvmf/common.sh@717 -- # local ip 00:19:18.641 17:06:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.641 17:06:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.641 17:06:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.641 17:06:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.641 17:06:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.641 17:06:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.641 17:06:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.641 17:06:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.641 17:06:34 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:19:18.641 17:06:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:18.641 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.641 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.901 nvme0n1 00:19:18.901 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.901 17:06:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.901 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.901 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.901 17:06:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.901 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.901 17:06:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.901 17:06:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.901 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.901 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.901 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.901 17:06:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.901 17:06:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:18.901 17:06:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.901 17:06:34 -- host/auth.sh@44 -- # digest=sha512 00:19:18.901 17:06:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.901 17:06:34 -- host/auth.sh@44 -- # keyid=3 00:19:18.901 17:06:34 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:18.901 17:06:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.901 17:06:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:18.901 17:06:34 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 
00:19:18.901 17:06:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:19:18.901 17:06:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.901 17:06:34 -- host/auth.sh@68 -- # digest=sha512 00:19:18.901 17:06:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:18.901 17:06:34 -- host/auth.sh@68 -- # keyid=3 00:19:18.901 17:06:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.901 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.901 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:18.901 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.901 17:06:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.901 17:06:34 -- nvmf/common.sh@717 -- # local ip 00:19:18.901 17:06:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.901 17:06:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.901 17:06:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.901 17:06:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.901 17:06:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.901 17:06:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.901 17:06:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.901 17:06:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.901 17:06:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.901 17:06:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:18.901 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.901 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.161 nvme0n1 00:19:19.161 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.161 17:06:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.161 17:06:34 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.161 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.161 17:06:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.161 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.161 17:06:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.161 17:06:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.161 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.161 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.161 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.161 17:06:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.161 17:06:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:19.161 17:06:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.161 17:06:34 -- host/auth.sh@44 -- # digest=sha512 00:19:19.161 17:06:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:19.161 17:06:34 -- host/auth.sh@44 -- # keyid=4 00:19:19.161 17:06:34 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:19.161 17:06:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.161 17:06:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:19.161 17:06:34 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:19.161 17:06:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:19:19.161 17:06:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.161 17:06:34 -- host/auth.sh@68 -- # digest=sha512 00:19:19.161 17:06:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:19.161 17:06:34 -- host/auth.sh@68 -- # keyid=4 00:19:19.161 17:06:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.161 17:06:34 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:19:19.161 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.161 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.161 17:06:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.161 17:06:34 -- nvmf/common.sh@717 -- # local ip 00:19:19.161 17:06:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.161 17:06:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.161 17:06:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.161 17:06:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.161 17:06:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.161 17:06:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.161 17:06:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.161 17:06:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.161 17:06:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.161 17:06:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:19.161 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.161 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.419 nvme0n1 00:19:19.419 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.419 17:06:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.419 17:06:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.419 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.419 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.419 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.419 17:06:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.419 17:06:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.419 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.419 17:06:34 -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.419 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.419 17:06:34 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.419 17:06:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.419 17:06:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:19.419 17:06:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.419 17:06:34 -- host/auth.sh@44 -- # digest=sha512 00:19:19.419 17:06:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.419 17:06:34 -- host/auth.sh@44 -- # keyid=0 00:19:19.419 17:06:34 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:19.419 17:06:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.419 17:06:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:19.419 17:06:34 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:19.419 17:06:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:19:19.419 17:06:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.419 17:06:34 -- host/auth.sh@68 -- # digest=sha512 00:19:19.419 17:06:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:19.419 17:06:34 -- host/auth.sh@68 -- # keyid=0 00:19:19.419 17:06:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.419 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.419 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.420 17:06:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.420 17:06:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.420 17:06:34 -- nvmf/common.sh@717 -- # local ip 00:19:19.420 17:06:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.420 17:06:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.420 17:06:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.420 
17:06:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.420 17:06:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.420 17:06:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.420 17:06:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.420 17:06:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.420 17:06:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.420 17:06:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:19.420 17:06:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.420 17:06:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.678 nvme0n1 00:19:19.678 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.678 17:06:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.678 17:06:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.678 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.678 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:19.678 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.678 17:06:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.678 17:06:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.678 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.678 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:19.678 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.678 17:06:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.678 17:06:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:19.678 17:06:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.678 17:06:35 -- host/auth.sh@44 -- # digest=sha512 00:19:19.678 17:06:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.678 17:06:35 -- host/auth.sh@44 -- # keyid=1 
00:19:19.678 17:06:35 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:19.678 17:06:35 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.678 17:06:35 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:19.678 17:06:35 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:19.678 17:06:35 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:19:19.678 17:06:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.678 17:06:35 -- host/auth.sh@68 -- # digest=sha512 00:19:19.678 17:06:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:19.678 17:06:35 -- host/auth.sh@68 -- # keyid=1 00:19:19.678 17:06:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.678 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.678 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:19.678 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.678 17:06:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.678 17:06:35 -- nvmf/common.sh@717 -- # local ip 00:19:19.678 17:06:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.678 17:06:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.678 17:06:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.678 17:06:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.678 17:06:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.678 17:06:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.678 17:06:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.678 17:06:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.678 17:06:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.678 17:06:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:19.678 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.678 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:19.938 nvme0n1 00:19:19.938 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.938 17:06:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.938 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.938 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:19.938 17:06:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.938 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.938 17:06:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.938 17:06:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.938 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.938 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:19.938 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.938 17:06:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.938 17:06:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:19.938 17:06:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.938 17:06:35 -- host/auth.sh@44 -- # digest=sha512 00:19:19.938 17:06:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.938 17:06:35 -- host/auth.sh@44 -- # keyid=2 00:19:19.938 17:06:35 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:19.938 17:06:35 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.938 17:06:35 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:19.938 17:06:35 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:19.938 17:06:35 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:19:19.938 17:06:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.938 17:06:35 -- 
host/auth.sh@68 -- # digest=sha512 00:19:19.938 17:06:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:19.938 17:06:35 -- host/auth.sh@68 -- # keyid=2 00:19:19.938 17:06:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.938 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.938 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:19.938 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.938 17:06:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.938 17:06:35 -- nvmf/common.sh@717 -- # local ip 00:19:19.938 17:06:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.938 17:06:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.938 17:06:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.938 17:06:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.938 17:06:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.938 17:06:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.938 17:06:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.938 17:06:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.938 17:06:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.938 17:06:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:19.938 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.938 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:20.196 nvme0n1 00:19:20.196 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.196 17:06:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.196 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.196 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:20.196 17:06:35 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:19:20.196 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.454 17:06:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.454 17:06:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.454 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.454 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:20.454 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.454 17:06:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.454 17:06:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:20.454 17:06:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.454 17:06:35 -- host/auth.sh@44 -- # digest=sha512 00:19:20.454 17:06:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:20.454 17:06:35 -- host/auth.sh@44 -- # keyid=3 00:19:20.454 17:06:35 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:20.454 17:06:35 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.454 17:06:35 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:20.454 17:06:35 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:20.454 17:06:35 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:19:20.454 17:06:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.454 17:06:35 -- host/auth.sh@68 -- # digest=sha512 00:19:20.454 17:06:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:20.454 17:06:35 -- host/auth.sh@68 -- # keyid=3 00:19:20.454 17:06:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.454 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.454 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:20.454 17:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.454 17:06:35 -- host/auth.sh@70 -- # get_main_ns_ip 
00:19:20.454 17:06:35 -- nvmf/common.sh@717 -- # local ip 00:19:20.454 17:06:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.454 17:06:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.454 17:06:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.454 17:06:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.454 17:06:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.454 17:06:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.454 17:06:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.454 17:06:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.454 17:06:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.454 17:06:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:20.454 17:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.454 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:19:20.713 nvme0n1 00:19:20.713 17:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.713 17:06:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.713 17:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.713 17:06:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.713 17:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.713 17:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.713 17:06:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.713 17:06:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.713 17:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.713 17:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.713 17:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.713 17:06:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.713 17:06:36 -- host/auth.sh@110 
-- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:20.713 17:06:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.713 17:06:36 -- host/auth.sh@44 -- # digest=sha512 00:19:20.713 17:06:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:20.713 17:06:36 -- host/auth.sh@44 -- # keyid=4 00:19:20.713 17:06:36 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:20.713 17:06:36 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.713 17:06:36 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:20.713 17:06:36 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:20.713 17:06:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:19:20.713 17:06:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.713 17:06:36 -- host/auth.sh@68 -- # digest=sha512 00:19:20.713 17:06:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:20.713 17:06:36 -- host/auth.sh@68 -- # keyid=4 00:19:20.713 17:06:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.713 17:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.713 17:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.713 17:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.713 17:06:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.713 17:06:36 -- nvmf/common.sh@717 -- # local ip 00:19:20.713 17:06:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.713 17:06:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.713 17:06:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.713 17:06:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.713 17:06:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.713 17:06:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 
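The `for dhgroup ... for keyid` markers in the trace show the shape of the driver loop: at a fixed digest (sha512 in this section), one authentication round is run per (dhgroup, keyid) pair, covering ffdhe4096, ffdhe6144, and ffdhe8192 with key IDs 0 through 4. A sketch of that loop, with `connect_authenticate` stubbed to print what a round would do (the real helper issues `bdev_nvme_set_options` and `bdev_nvme_attach_controller` via `rpc.py`):

```shell
# Stub: the real connect_authenticate configures DH-CHAP options and attaches
# a controller over TCP; here it just records the round being run.
connect_authenticate() { echo "round: digest=$1 dhgroup=$2 keyid=$3"; }

digest=sha512
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # groups exercised in this section
keyids=(0 1 2 3 4)                          # key IDs seen in the trace

# One authentication round per (dhgroup, keyid) pair at the fixed digest.
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${keyids[@]}"; do
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done
```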
00:19:20.713 17:06:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.713 17:06:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.713 17:06:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.713 17:06:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:20.713 17:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.713 17:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.973 nvme0n1 00:19:20.973 17:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.973 17:06:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.973 17:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.973 17:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.973 17:06:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.973 17:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.973 17:06:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.973 17:06:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.973 17:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.973 17:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.973 17:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.973 17:06:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.973 17:06:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.973 17:06:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:20.973 17:06:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.973 17:06:36 -- host/auth.sh@44 -- # digest=sha512 00:19:20.973 17:06:36 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.973 17:06:36 -- host/auth.sh@44 -- # keyid=0 00:19:20.973 17:06:36 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:20.973 17:06:36 -- 
host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.973 17:06:36 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:20.973 17:06:36 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:20.973 17:06:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:19:20.973 17:06:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.973 17:06:36 -- host/auth.sh@68 -- # digest=sha512 00:19:20.973 17:06:36 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:20.973 17:06:36 -- host/auth.sh@68 -- # keyid=0 00:19:20.973 17:06:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.973 17:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.973 17:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.973 17:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.973 17:06:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.973 17:06:36 -- nvmf/common.sh@717 -- # local ip 00:19:20.973 17:06:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.973 17:06:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.973 17:06:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.973 17:06:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.973 17:06:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.973 17:06:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.973 17:06:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.973 17:06:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.973 17:06:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.973 17:06:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:20.973 17:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.973 17:06:36 -- common/autotest_common.sh@10 
-- # set +x 00:19:21.542 nvme0n1 00:19:21.542 17:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.542 17:06:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.542 17:06:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.543 17:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.543 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:21.543 17:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.543 17:06:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.543 17:06:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.543 17:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.543 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:21.543 17:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.543 17:06:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.543 17:06:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:21.543 17:06:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.543 17:06:37 -- host/auth.sh@44 -- # digest=sha512 00:19:21.543 17:06:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.543 17:06:37 -- host/auth.sh@44 -- # keyid=1 00:19:21.543 17:06:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:21.543 17:06:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:21.543 17:06:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:21.543 17:06:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:21.543 17:06:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:19:21.543 17:06:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.543 17:06:37 -- host/auth.sh@68 -- # digest=sha512 00:19:21.543 17:06:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:21.543 17:06:37 -- host/auth.sh@68 -- # keyid=1 00:19:21.543 
17:06:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.543 17:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.543 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:21.543 17:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.543 17:06:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.543 17:06:37 -- nvmf/common.sh@717 -- # local ip 00:19:21.543 17:06:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:21.543 17:06:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.543 17:06:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.543 17:06:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.543 17:06:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:21.543 17:06:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.543 17:06:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:21.543 17:06:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:21.543 17:06:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:21.543 17:06:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:21.543 17:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.543 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:22.113 nvme0n1 00:19:22.113 17:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.113 17:06:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.113 17:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.113 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:22.113 17:06:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.113 17:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.113 17:06:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.113 17:06:37 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.113 17:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.113 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:22.113 17:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.113 17:06:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.113 17:06:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:22.113 17:06:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.113 17:06:37 -- host/auth.sh@44 -- # digest=sha512 00:19:22.113 17:06:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:22.113 17:06:37 -- host/auth.sh@44 -- # keyid=2 00:19:22.113 17:06:37 -- host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:22.113 17:06:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:22.113 17:06:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:22.113 17:06:37 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:22.113 17:06:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:19:22.113 17:06:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.113 17:06:37 -- host/auth.sh@68 -- # digest=sha512 00:19:22.113 17:06:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:22.113 17:06:37 -- host/auth.sh@68 -- # keyid=2 00:19:22.113 17:06:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.113 17:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.113 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:22.113 17:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.113 17:06:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.113 17:06:37 -- nvmf/common.sh@717 -- # local ip 00:19:22.113 17:06:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.113 17:06:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.113 17:06:37 
-- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.113 17:06:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.113 17:06:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.113 17:06:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.113 17:06:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.113 17:06:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.113 17:06:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.113 17:06:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:22.113 17:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.113 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:22.683 nvme0n1 00:19:22.683 17:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.683 17:06:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.683 17:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.683 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.683 17:06:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.683 17:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.683 17:06:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.683 17:06:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.683 17:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.683 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.683 17:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.683 17:06:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.683 17:06:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:22.683 17:06:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.683 17:06:38 -- host/auth.sh@44 -- # digest=sha512 00:19:22.683 17:06:38 -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:22.683 17:06:38 -- host/auth.sh@44 -- # keyid=3 00:19:22.683 17:06:38 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:22.683 17:06:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:22.683 17:06:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:22.683 17:06:38 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:22.683 17:06:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:19:22.683 17:06:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.683 17:06:38 -- host/auth.sh@68 -- # digest=sha512 00:19:22.683 17:06:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:22.683 17:06:38 -- host/auth.sh@68 -- # keyid=3 00:19:22.683 17:06:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.683 17:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.683 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.683 17:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.683 17:06:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.683 17:06:38 -- nvmf/common.sh@717 -- # local ip 00:19:22.683 17:06:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.683 17:06:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.683 17:06:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.683 17:06:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.683 17:06:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.683 17:06:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.683 17:06:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.683 17:06:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.683 17:06:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.683 17:06:38 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:22.683 17:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.683 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:23.249 nvme0n1 00:19:23.249 17:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.249 17:06:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.249 17:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.249 17:06:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.249 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:23.249 17:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.249 17:06:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.249 17:06:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.249 17:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.249 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:23.249 17:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.249 17:06:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.249 17:06:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:23.249 17:06:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.249 17:06:38 -- host/auth.sh@44 -- # digest=sha512 00:19:23.249 17:06:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:23.249 17:06:38 -- host/auth.sh@44 -- # keyid=4 00:19:23.249 17:06:38 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:23.249 17:06:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:23.249 17:06:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:23.249 17:06:38 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:23.249 17:06:38 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:19:23.249 17:06:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.249 17:06:38 -- host/auth.sh@68 -- # digest=sha512 00:19:23.249 17:06:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:23.249 17:06:38 -- host/auth.sh@68 -- # keyid=4 00:19:23.249 17:06:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:23.249 17:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.249 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:23.249 17:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.249 17:06:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.249 17:06:38 -- nvmf/common.sh@717 -- # local ip 00:19:23.249 17:06:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.249 17:06:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.249 17:06:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.249 17:06:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.249 17:06:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.249 17:06:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.249 17:06:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.249 17:06:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.249 17:06:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.249 17:06:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:23.249 17:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.249 17:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:23.820 nvme0n1 00:19:23.820 17:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.820 17:06:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.820 17:06:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.820 17:06:39 -- common/autotest_common.sh@10 -- # set +x 00:19:23.820 17:06:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.820 17:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.820 17:06:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.820 17:06:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.820 17:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.820 17:06:39 -- common/autotest_common.sh@10 -- # set +x 00:19:23.820 17:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.820 17:06:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.820 17:06:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.820 17:06:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:23.820 17:06:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.820 17:06:39 -- host/auth.sh@44 -- # digest=sha512 00:19:23.820 17:06:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.820 17:06:39 -- host/auth.sh@44 -- # keyid=0 00:19:23.820 17:06:39 -- host/auth.sh@45 -- # key=DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:23.820 17:06:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:23.820 17:06:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:23.820 17:06:39 -- host/auth.sh@49 -- # echo DHHC-1:00:YjEwYTkxNDUyYTM1MWYyZjZiMGIxMGUwYTA5OWEzNzmFWTLr: 00:19:23.820 17:06:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:19:23.820 17:06:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.820 17:06:39 -- host/auth.sh@68 -- # digest=sha512 00:19:23.820 17:06:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:23.820 17:06:39 -- host/auth.sh@68 -- # keyid=0 00:19:23.820 17:06:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.820 17:06:39 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:23.820 17:06:39 -- common/autotest_common.sh@10 -- # set +x 00:19:23.820 17:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.820 17:06:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.820 17:06:39 -- nvmf/common.sh@717 -- # local ip 00:19:23.820 17:06:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.820 17:06:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.820 17:06:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.820 17:06:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.820 17:06:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.820 17:06:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.820 17:06:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.820 17:06:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.820 17:06:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.820 17:06:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:23.820 17:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.820 17:06:39 -- common/autotest_common.sh@10 -- # set +x 00:19:24.757 nvme0n1 00:19:24.757 17:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.757 17:06:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.757 17:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.757 17:06:40 -- common/autotest_common.sh@10 -- # set +x 00:19:24.757 17:06:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.757 17:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.757 17:06:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.757 17:06:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.757 17:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.757 17:06:40 -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.757 17:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.757 17:06:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.757 17:06:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:24.757 17:06:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.757 17:06:40 -- host/auth.sh@44 -- # digest=sha512 00:19:24.757 17:06:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:24.757 17:06:40 -- host/auth.sh@44 -- # keyid=1 00:19:24.757 17:06:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:24.757 17:06:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:24.757 17:06:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:24.757 17:06:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:24.757 17:06:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:19:24.757 17:06:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.757 17:06:40 -- host/auth.sh@68 -- # digest=sha512 00:19:24.757 17:06:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:24.757 17:06:40 -- host/auth.sh@68 -- # keyid=1 00:19:24.757 17:06:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:24.757 17:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.757 17:06:40 -- common/autotest_common.sh@10 -- # set +x 00:19:24.757 17:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.757 17:06:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.757 17:06:40 -- nvmf/common.sh@717 -- # local ip 00:19:24.757 17:06:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.757 17:06:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.757 17:06:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.757 17:06:40 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.757 17:06:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:24.757 17:06:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.757 17:06:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:24.757 17:06:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:24.757 17:06:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:24.757 17:06:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:24.757 17:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.757 17:06:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.692 nvme0n1 00:19:25.692 17:06:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.692 17:06:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.692 17:06:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.692 17:06:41 -- common/autotest_common.sh@10 -- # set +x 00:19:25.692 17:06:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.692 17:06:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.951 17:06:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.951 17:06:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.951 17:06:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.951 17:06:41 -- common/autotest_common.sh@10 -- # set +x 00:19:25.951 17:06:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.951 17:06:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.951 17:06:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:25.951 17:06:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.951 17:06:41 -- host/auth.sh@44 -- # digest=sha512 00:19:25.951 17:06:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:25.951 17:06:41 -- host/auth.sh@44 -- # keyid=2 00:19:25.951 17:06:41 -- 
host/auth.sh@45 -- # key=DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:25.951 17:06:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:25.951 17:06:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:25.951 17:06:41 -- host/auth.sh@49 -- # echo DHHC-1:01:M2RjMzNlNWFjZDE5ZDcxMTI3ZjM2MThmMjc0MGE3ZDjgWFep: 00:19:25.951 17:06:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:19:25.951 17:06:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.951 17:06:41 -- host/auth.sh@68 -- # digest=sha512 00:19:25.951 17:06:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:25.951 17:06:41 -- host/auth.sh@68 -- # keyid=2 00:19:25.951 17:06:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.951 17:06:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.951 17:06:41 -- common/autotest_common.sh@10 -- # set +x 00:19:25.951 17:06:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.951 17:06:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.951 17:06:41 -- nvmf/common.sh@717 -- # local ip 00:19:25.951 17:06:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.951 17:06:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.951 17:06:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.951 17:06:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.951 17:06:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:25.951 17:06:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.952 17:06:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.952 17:06:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.952 17:06:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.952 17:06:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.952 
17:06:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.952 17:06:41 -- common/autotest_common.sh@10 -- # set +x 00:19:26.887 nvme0n1 00:19:26.888 17:06:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.888 17:06:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.888 17:06:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:26.888 17:06:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.888 17:06:42 -- common/autotest_common.sh@10 -- # set +x 00:19:26.888 17:06:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.888 17:06:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.888 17:06:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.888 17:06:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.888 17:06:42 -- common/autotest_common.sh@10 -- # set +x 00:19:26.888 17:06:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.888 17:06:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:26.888 17:06:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:26.888 17:06:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:26.888 17:06:42 -- host/auth.sh@44 -- # digest=sha512 00:19:26.888 17:06:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:26.888 17:06:42 -- host/auth.sh@44 -- # keyid=3 00:19:26.888 17:06:42 -- host/auth.sh@45 -- # key=DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:26.888 17:06:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:26.888 17:06:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:26.888 17:06:42 -- host/auth.sh@49 -- # echo DHHC-1:02:OGYxNzdiNDM4NjhjNjEyOTVjNmQ3MTZlMmZlZTY5ZDQzZWM0NGYwYjU0YzRiY2IztwrlVA==: 00:19:26.888 17:06:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:19:26.888 17:06:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:26.888 17:06:42 -- host/auth.sh@68 -- # digest=sha512 00:19:26.888 
17:06:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:26.888 17:06:42 -- host/auth.sh@68 -- # keyid=3 00:19:26.888 17:06:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.888 17:06:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.888 17:06:42 -- common/autotest_common.sh@10 -- # set +x 00:19:26.888 17:06:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.888 17:06:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:26.888 17:06:42 -- nvmf/common.sh@717 -- # local ip 00:19:26.888 17:06:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:26.888 17:06:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:26.888 17:06:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.888 17:06:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.888 17:06:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:26.888 17:06:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.888 17:06:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:26.888 17:06:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:26.888 17:06:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:26.888 17:06:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:26.888 17:06:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.888 17:06:42 -- common/autotest_common.sh@10 -- # set +x 00:19:27.820 nvme0n1 00:19:27.820 17:06:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.820 17:06:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.820 17:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.820 17:06:43 -- common/autotest_common.sh@10 -- # set +x 00:19:27.820 17:06:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:27.820 17:06:43 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:19:27.820 17:06:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.820 17:06:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.820 17:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.820 17:06:43 -- common/autotest_common.sh@10 -- # set +x 00:19:27.820 17:06:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.820 17:06:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:27.820 17:06:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:27.820 17:06:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:27.820 17:06:43 -- host/auth.sh@44 -- # digest=sha512 00:19:27.820 17:06:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:27.820 17:06:43 -- host/auth.sh@44 -- # keyid=4 00:19:27.820 17:06:43 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:27.820 17:06:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:27.820 17:06:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:27.820 17:06:43 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM1MDdmN2I3NTMyNGQ4Zjg3NjVjODg5NmFiNDUyZWE3YjlhZWE2MTUyNTJjNmUzMDdlNmZkY2Y1ZjFlYjBiM5c5RUQ=: 00:19:27.820 17:06:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:19:27.820 17:06:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:27.820 17:06:43 -- host/auth.sh@68 -- # digest=sha512 00:19:27.820 17:06:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:27.820 17:06:43 -- host/auth.sh@68 -- # keyid=4 00:19:27.820 17:06:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.820 17:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.820 17:06:43 -- common/autotest_common.sh@10 -- # set +x 00:19:27.820 17:06:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.820 17:06:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:27.820 
17:06:43 -- nvmf/common.sh@717 -- # local ip 00:19:27.820 17:06:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:27.820 17:06:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:27.820 17:06:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.820 17:06:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.820 17:06:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:27.820 17:06:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.820 17:06:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:27.820 17:06:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:27.820 17:06:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:27.820 17:06:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:27.820 17:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.820 17:06:43 -- common/autotest_common.sh@10 -- # set +x 00:19:28.752 nvme0n1 00:19:28.753 17:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.753 17:06:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.753 17:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.753 17:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:28.753 17:06:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:28.753 17:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.753 17:06:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.753 17:06:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.753 17:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.753 17:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:28.753 17:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.753 17:06:44 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:28.753 17:06:44 -- host/auth.sh@42 -- 
# local digest dhgroup keyid key 00:19:28.753 17:06:44 -- host/auth.sh@44 -- # digest=sha256 00:19:28.753 17:06:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:28.753 17:06:44 -- host/auth.sh@44 -- # keyid=1 00:19:28.753 17:06:44 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:28.753 17:06:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:28.753 17:06:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:28.753 17:06:44 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQ4NjI1MTg3OTU4NjFhZGM1ZmUyNzlkYWI5M2EzYTA3YzVkMWI2MzJiMzA2YWFlG8ocLQ==: 00:19:28.753 17:06:44 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.753 17:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.753 17:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:28.753 17:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.753 17:06:44 -- host/auth.sh@119 -- # get_main_ns_ip 00:19:28.753 17:06:44 -- nvmf/common.sh@717 -- # local ip 00:19:28.753 17:06:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:28.753 17:06:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:28.753 17:06:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.753 17:06:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.753 17:06:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:28.753 17:06:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.753 17:06:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:28.753 17:06:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:28.753 17:06:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:28.753 17:06:44 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:28.753 17:06:44 -- common/autotest_common.sh@638 -- # local es=0 00:19:28.753 
17:06:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:28.753 17:06:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:28.753 17:06:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:28.753 17:06:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:28.753 17:06:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:28.753 17:06:44 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:28.753 17:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.753 17:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:28.753 request: 00:19:28.753 { 00:19:28.753 "name": "nvme0", 00:19:28.753 "trtype": "tcp", 00:19:28.753 "traddr": "10.0.0.1", 00:19:28.753 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:28.753 "adrfam": "ipv4", 00:19:28.753 "trsvcid": "4420", 00:19:28.753 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:28.753 "method": "bdev_nvme_attach_controller", 00:19:28.753 "req_id": 1 00:19:28.753 } 00:19:28.753 Got JSON-RPC error response 00:19:28.753 response: 00:19:28.753 { 00:19:28.753 "code": -32602, 00:19:28.753 "message": "Invalid parameters" 00:19:28.753 } 00:19:28.753 17:06:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:28.753 17:06:44 -- common/autotest_common.sh@641 -- # es=1 00:19:28.753 17:06:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:28.753 17:06:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:28.753 17:06:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:28.753 17:06:44 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.753 17:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.753 17:06:44 -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.753 17:06:44 -- host/auth.sh@121 -- # jq length 00:19:28.753 17:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.753 17:06:44 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:19:28.753 17:06:44 -- host/auth.sh@124 -- # get_main_ns_ip 00:19:28.753 17:06:44 -- nvmf/common.sh@717 -- # local ip 00:19:28.753 17:06:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:28.753 17:06:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:28.753 17:06:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.013 17:06:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.013 17:06:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:29.013 17:06:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.013 17:06:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:29.013 17:06:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:29.013 17:06:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:29.013 17:06:44 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:29.013 17:06:44 -- common/autotest_common.sh@638 -- # local es=0 00:19:29.013 17:06:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:29.013 17:06:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:29.014 17:06:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:29.014 17:06:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:29.014 17:06:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:29.014 17:06:44 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:29.014 17:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.014 17:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:29.014 request: 00:19:29.014 { 00:19:29.014 "name": "nvme0", 00:19:29.014 "trtype": "tcp", 00:19:29.014 "traddr": "10.0.0.1", 00:19:29.014 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:29.014 "adrfam": "ipv4", 00:19:29.014 "trsvcid": "4420", 00:19:29.014 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:29.014 "dhchap_key": "key2", 00:19:29.014 "method": "bdev_nvme_attach_controller", 00:19:29.014 "req_id": 1 00:19:29.014 } 00:19:29.014 Got JSON-RPC error response 00:19:29.014 response: 00:19:29.014 { 00:19:29.014 "code": -32602, 00:19:29.014 "message": "Invalid parameters" 00:19:29.014 } 00:19:29.014 17:06:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:29.014 17:06:44 -- common/autotest_common.sh@641 -- # es=1 00:19:29.014 17:06:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:29.014 17:06:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:29.014 17:06:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:29.014 17:06:44 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.014 17:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.014 17:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:29.014 17:06:44 -- host/auth.sh@127 -- # jq length 00:19:29.014 17:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.014 17:06:44 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:19:29.014 17:06:44 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:29.014 17:06:44 -- host/auth.sh@130 -- # cleanup 00:19:29.014 17:06:44 -- host/auth.sh@24 -- # nvmftestfini 00:19:29.014 17:06:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:29.014 17:06:44 -- nvmf/common.sh@117 -- # sync 00:19:29.014 17:06:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.014 17:06:44 -- nvmf/common.sh@120 
-- # set +e 00:19:29.014 17:06:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.014 17:06:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.014 rmmod nvme_tcp 00:19:29.014 rmmod nvme_fabrics 00:19:29.014 17:06:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.014 17:06:44 -- nvmf/common.sh@124 -- # set -e 00:19:29.014 17:06:44 -- nvmf/common.sh@125 -- # return 0 00:19:29.014 17:06:44 -- nvmf/common.sh@478 -- # '[' -n 1745767 ']' 00:19:29.014 17:06:44 -- nvmf/common.sh@479 -- # killprocess 1745767 00:19:29.014 17:06:44 -- common/autotest_common.sh@936 -- # '[' -z 1745767 ']' 00:19:29.014 17:06:44 -- common/autotest_common.sh@940 -- # kill -0 1745767 00:19:29.014 17:06:44 -- common/autotest_common.sh@941 -- # uname 00:19:29.014 17:06:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:29.014 17:06:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1745767 00:19:29.014 17:06:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:29.014 17:06:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:29.014 17:06:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1745767' 00:19:29.014 killing process with pid 1745767 00:19:29.014 17:06:44 -- common/autotest_common.sh@955 -- # kill 1745767 00:19:29.014 17:06:44 -- common/autotest_common.sh@960 -- # wait 1745767 00:19:29.312 17:06:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:29.312 17:06:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:29.312 17:06:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:29.312 17:06:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.312 17:06:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.312 17:06:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.312 17:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.312 17:06:44 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:31.216 17:06:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:31.475 17:06:46 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:31.475 17:06:46 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:31.475 17:06:46 -- host/auth.sh@27 -- # clean_kernel_target 00:19:31.475 17:06:46 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:31.475 17:06:46 -- nvmf/common.sh@675 -- # echo 0 00:19:31.475 17:06:46 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:31.475 17:06:46 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:31.475 17:06:46 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:31.475 17:06:46 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:31.475 17:06:46 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:31.475 17:06:46 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:31.475 17:06:46 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:32.409 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:32.409 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:32.409 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:32.409 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:32.409 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:32.409 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:32.409 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:32.409 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:32.409 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:32.668 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:32.668 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:32.668 
0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:32.668 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:32.668 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:32.668 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:32.668 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:33.604 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:19:33.604 17:06:49 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4gI /tmp/spdk.key-null.JtK /tmp/spdk.key-sha256.6eR /tmp/spdk.key-sha384.Kwb /tmp/spdk.key-sha512.sT6 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:19:33.604 17:06:49 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:34.982 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:34.982 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:19:34.982 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:34.982 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:34.982 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:34.982 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:34.982 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:34.982 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:34.982 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:34.982 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:34.982 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:34.982 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:34.982 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:34.982 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:34.982 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:34.982 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:34.982 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:34.982 00:19:34.982 real 0m46.211s 00:19:34.982 user 
0m43.912s 00:19:34.982 sys 0m5.482s 00:19:34.982 17:06:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:34.982 17:06:50 -- common/autotest_common.sh@10 -- # set +x 00:19:34.982 ************************************ 00:19:34.982 END TEST nvmf_auth 00:19:34.982 ************************************ 00:19:34.982 17:06:50 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:19:34.982 17:06:50 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:34.982 17:06:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:34.982 17:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:34.982 17:06:50 -- common/autotest_common.sh@10 -- # set +x 00:19:34.982 ************************************ 00:19:34.982 START TEST nvmf_digest 00:19:34.982 ************************************ 00:19:34.982 17:06:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:35.240 * Looking for test storage... 
00:19:35.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:35.241 17:06:50 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.241 17:06:50 -- nvmf/common.sh@7 -- # uname -s 00:19:35.241 17:06:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.241 17:06:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.241 17:06:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.241 17:06:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.241 17:06:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.241 17:06:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.241 17:06:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.241 17:06:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.241 17:06:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.241 17:06:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.241 17:06:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.241 17:06:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.241 17:06:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.241 17:06:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.241 17:06:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.241 17:06:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.241 17:06:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.241 17:06:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.241 17:06:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.241 17:06:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.241 17:06:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.241 17:06:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.241 17:06:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.241 17:06:50 -- paths/export.sh@5 -- # export PATH 00:19:35.241 17:06:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.241 17:06:50 -- nvmf/common.sh@47 -- # : 0 00:19:35.241 17:06:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.241 17:06:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.241 17:06:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.241 17:06:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.241 17:06:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.241 17:06:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.241 17:06:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.241 17:06:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.241 17:06:50 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:35.241 17:06:50 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:35.241 17:06:50 -- host/digest.sh@16 -- # runtime=2 00:19:35.241 17:06:50 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:35.241 17:06:50 -- host/digest.sh@138 -- # nvmftestinit 00:19:35.241 17:06:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:35.241 17:06:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.241 17:06:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:35.241 17:06:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:35.241 17:06:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:35.241 17:06:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.241 17:06:50 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:35.241 17:06:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.241 17:06:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:35.241 17:06:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:35.241 17:06:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.241 17:06:50 -- common/autotest_common.sh@10 -- # set +x 00:19:37.163 17:06:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.163 17:06:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:37.163 17:06:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:37.163 17:06:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:37.163 17:06:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:37.163 17:06:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:37.163 17:06:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:37.163 17:06:52 -- nvmf/common.sh@295 -- # net_devs=() 00:19:37.163 17:06:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:37.163 17:06:52 -- nvmf/common.sh@296 -- # e810=() 00:19:37.163 17:06:52 -- nvmf/common.sh@296 -- # local -ga e810 00:19:37.163 17:06:52 -- nvmf/common.sh@297 -- # x722=() 00:19:37.163 17:06:52 -- nvmf/common.sh@297 -- # local -ga x722 00:19:37.163 17:06:52 -- nvmf/common.sh@298 -- # mlx=() 00:19:37.163 17:06:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:37.163 17:06:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.163 17:06:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:37.163 17:06:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:37.163 17:06:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:37.163 17:06:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.163 17:06:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:37.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:37.163 17:06:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.163 17:06:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:37.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:37.163 17:06:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:19:37.163 17:06:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:37.163 17:06:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:37.163 17:06:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.163 17:06:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.163 17:06:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:37.163 17:06:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.163 17:06:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:37.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:37.163 17:06:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.163 17:06:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.163 17:06:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.163 17:06:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:37.163 17:06:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.163 17:06:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:37.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:37.163 17:06:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.164 17:06:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:37.164 17:06:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:37.164 17:06:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:37.164 17:06:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:37.164 17:06:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:37.164 17:06:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.164 17:06:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.164 17:06:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.164 17:06:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:37.164 17:06:52 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.164 17:06:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.164 17:06:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:37.164 17:06:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.164 17:06:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.164 17:06:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:37.164 17:06:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:37.164 17:06:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.164 17:06:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.164 17:06:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.164 17:06:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.164 17:06:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:37.164 17:06:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.164 17:06:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.164 17:06:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.164 17:06:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:37.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:19:37.164 00:19:37.164 --- 10.0.0.2 ping statistics --- 00:19:37.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.164 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:37.164 17:06:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:19:37.164 00:19:37.164 --- 10.0.0.1 ping statistics --- 00:19:37.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.164 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:19:37.164 17:06:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.164 17:06:52 -- nvmf/common.sh@411 -- # return 0 00:19:37.164 17:06:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:37.164 17:06:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.164 17:06:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:37.164 17:06:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:37.164 17:06:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.164 17:06:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:37.164 17:06:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:37.164 17:06:52 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:37.164 17:06:52 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:37.164 17:06:52 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:37.164 17:06:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:37.164 17:06:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:37.164 17:06:52 -- common/autotest_common.sh@10 -- # set +x 00:19:37.164 ************************************ 00:19:37.164 START TEST nvmf_digest_clean 00:19:37.164 ************************************ 00:19:37.164 17:06:52 -- common/autotest_common.sh@1111 -- # run_digest 00:19:37.164 17:06:52 -- host/digest.sh@120 -- # local dsa_initiator 00:19:37.164 17:06:52 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:37.164 17:06:52 -- host/digest.sh@121 -- # dsa_initiator=false 00:19:37.164 17:06:52 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:37.164 17:06:52 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:37.164 17:06:52 -- 
nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:37.164 17:06:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:37.164 17:06:52 -- common/autotest_common.sh@10 -- # set +x 00:19:37.164 17:06:52 -- nvmf/common.sh@470 -- # nvmfpid=1755310 00:19:37.164 17:06:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:37.164 17:06:52 -- nvmf/common.sh@471 -- # waitforlisten 1755310 00:19:37.164 17:06:52 -- common/autotest_common.sh@817 -- # '[' -z 1755310 ']' 00:19:37.164 17:06:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.164 17:06:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:37.164 17:06:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.164 17:06:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:37.164 17:06:52 -- common/autotest_common.sh@10 -- # set +x 00:19:37.423 [2024-04-18 17:06:52.911891] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:37.423 [2024-04-18 17:06:52.911968] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.423 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.423 [2024-04-18 17:06:52.979799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.423 [2024-04-18 17:06:53.093464] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.423 [2024-04-18 17:06:53.093533] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:37.423 [2024-04-18 17:06:53.093550] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.423 [2024-04-18 17:06:53.093563] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.423 [2024-04-18 17:06:53.093575] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.423 [2024-04-18 17:06:53.093618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.356 17:06:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:38.356 17:06:53 -- common/autotest_common.sh@850 -- # return 0 00:19:38.356 17:06:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:38.356 17:06:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:38.356 17:06:53 -- common/autotest_common.sh@10 -- # set +x 00:19:38.356 17:06:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.356 17:06:53 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:38.356 17:06:53 -- host/digest.sh@126 -- # common_target_config 00:19:38.356 17:06:53 -- host/digest.sh@43 -- # rpc_cmd 00:19:38.356 17:06:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.356 17:06:53 -- common/autotest_common.sh@10 -- # set +x 00:19:38.356 null0 00:19:38.356 [2024-04-18 17:06:54.013928] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.356 [2024-04-18 17:06:54.038158] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.357 17:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.357 17:06:54 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:38.357 17:06:54 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:38.357 17:06:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:38.357 17:06:54 -- host/digest.sh@80 -- # rw=randread 00:19:38.357 
17:06:54 -- host/digest.sh@80 -- # bs=4096 00:19:38.357 17:06:54 -- host/digest.sh@80 -- # qd=128 00:19:38.357 17:06:54 -- host/digest.sh@80 -- # scan_dsa=false 00:19:38.357 17:06:54 -- host/digest.sh@83 -- # bperfpid=1755462 00:19:38.357 17:06:54 -- host/digest.sh@84 -- # waitforlisten 1755462 /var/tmp/bperf.sock 00:19:38.357 17:06:54 -- common/autotest_common.sh@817 -- # '[' -z 1755462 ']' 00:19:38.357 17:06:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:38.357 17:06:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:38.357 17:06:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:38.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:38.357 17:06:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:38.357 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:19:38.357 17:06:54 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:38.615 [2024-04-18 17:06:54.085629] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:19:38.615 [2024-04-18 17:06:54.085703] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755462 ] 00:19:38.615 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.615 [2024-04-18 17:06:54.153324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.615 [2024-04-18 17:06:54.284835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.549 17:06:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:39.549 17:06:55 -- common/autotest_common.sh@850 -- # return 0 00:19:39.549 17:06:55 -- host/digest.sh@86 -- # false 00:19:39.549 17:06:55 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:39.549 17:06:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:39.807 17:06:55 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:39.807 17:06:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:40.065 nvme0n1 00:19:40.065 17:06:55 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:40.065 17:06:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:40.322 Running I/O for 2 seconds... 
00:19:42.219 00:19:42.219 Latency(us) 00:19:42.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.219 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:42.219 nvme0n1 : 2.00 17786.42 69.48 0.00 0.00 7188.40 3640.89 14951.92 00:19:42.219 =================================================================================================================== 00:19:42.219 Total : 17786.42 69.48 0.00 0.00 7188.40 3640.89 14951.92 00:19:42.219 0 00:19:42.219 17:06:57 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:42.219 17:06:57 -- host/digest.sh@93 -- # get_accel_stats 00:19:42.219 17:06:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:42.219 17:06:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:42.219 17:06:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:42.219 | select(.opcode=="crc32c") 00:19:42.219 | "\(.module_name) \(.executed)"' 00:19:42.477 17:06:58 -- host/digest.sh@94 -- # false 00:19:42.477 17:06:58 -- host/digest.sh@94 -- # exp_module=software 00:19:42.477 17:06:58 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:42.477 17:06:58 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:42.477 17:06:58 -- host/digest.sh@98 -- # killprocess 1755462 00:19:42.477 17:06:58 -- common/autotest_common.sh@936 -- # '[' -z 1755462 ']' 00:19:42.477 17:06:58 -- common/autotest_common.sh@940 -- # kill -0 1755462 00:19:42.477 17:06:58 -- common/autotest_common.sh@941 -- # uname 00:19:42.477 17:06:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:42.477 17:06:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1755462 00:19:42.477 17:06:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:42.477 17:06:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:42.477 17:06:58 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1755462' 00:19:42.477 killing process with pid 1755462 00:19:42.477 17:06:58 -- common/autotest_common.sh@955 -- # kill 1755462 00:19:42.477 Received shutdown signal, test time was about 2.000000 seconds 00:19:42.477 00:19:42.477 Latency(us) 00:19:42.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.477 =================================================================================================================== 00:19:42.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.477 17:06:58 -- common/autotest_common.sh@960 -- # wait 1755462 00:19:42.734 17:06:58 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:42.734 17:06:58 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:42.734 17:06:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:42.734 17:06:58 -- host/digest.sh@80 -- # rw=randread 00:19:42.734 17:06:58 -- host/digest.sh@80 -- # bs=131072 00:19:42.734 17:06:58 -- host/digest.sh@80 -- # qd=16 00:19:42.734 17:06:58 -- host/digest.sh@80 -- # scan_dsa=false 00:19:42.734 17:06:58 -- host/digest.sh@83 -- # bperfpid=1755995 00:19:42.734 17:06:58 -- host/digest.sh@84 -- # waitforlisten 1755995 /var/tmp/bperf.sock 00:19:42.734 17:06:58 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:42.734 17:06:58 -- common/autotest_common.sh@817 -- # '[' -z 1755995 ']' 00:19:42.734 17:06:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:42.734 17:06:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:42.734 17:06:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:42.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:19:42.734 17:06:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:42.734 17:06:58 -- common/autotest_common.sh@10 -- # set +x 00:19:42.992 [2024-04-18 17:06:58.450260] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:42.992 [2024-04-18 17:06:58.450339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755995 ] 00:19:42.992 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:42.993 Zero copy mechanism will not be used. 00:19:42.993 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.993 [2024-04-18 17:06:58.509437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.993 [2024-04-18 17:06:58.621229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.993 17:06:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:42.993 17:06:58 -- common/autotest_common.sh@850 -- # return 0 00:19:42.993 17:06:58 -- host/digest.sh@86 -- # false 00:19:42.993 17:06:58 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:42.993 17:06:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:43.558 17:06:59 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:43.558 17:06:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:43.816 nvme0n1 00:19:43.816 17:06:59 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:43.816 17:06:59 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:43.816 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:43.816 Zero copy mechanism will not be used. 00:19:43.816 Running I/O for 2 seconds... 00:19:46.344 00:19:46.344 Latency(us) 00:19:46.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.344 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:46.344 nvme0n1 : 2.00 4137.35 517.17 0.00 0.00 3863.14 1013.38 10728.49 00:19:46.344 =================================================================================================================== 00:19:46.344 Total : 4137.35 517.17 0.00 0.00 3863.14 1013.38 10728.49 00:19:46.344 0 00:19:46.344 17:07:01 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:46.344 17:07:01 -- host/digest.sh@93 -- # get_accel_stats 00:19:46.344 17:07:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:46.344 17:07:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:46.344 17:07:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:46.344 | select(.opcode=="crc32c") 00:19:46.344 | "\(.module_name) \(.executed)"' 00:19:46.344 17:07:01 -- host/digest.sh@94 -- # false 00:19:46.344 17:07:01 -- host/digest.sh@94 -- # exp_module=software 00:19:46.344 17:07:01 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:46.344 17:07:01 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:46.344 17:07:01 -- host/digest.sh@98 -- # killprocess 1755995 00:19:46.344 17:07:01 -- common/autotest_common.sh@936 -- # '[' -z 1755995 ']' 00:19:46.344 17:07:01 -- common/autotest_common.sh@940 -- # kill -0 1755995 00:19:46.344 17:07:01 -- common/autotest_common.sh@941 -- # uname 00:19:46.344 17:07:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.344 17:07:01 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1755995 00:19:46.344 17:07:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:46.344 17:07:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:46.344 17:07:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1755995' 00:19:46.344 killing process with pid 1755995 00:19:46.344 17:07:01 -- common/autotest_common.sh@955 -- # kill 1755995 00:19:46.344 Received shutdown signal, test time was about 2.000000 seconds 00:19:46.344 00:19:46.344 Latency(us) 00:19:46.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.344 =================================================================================================================== 00:19:46.344 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.344 17:07:01 -- common/autotest_common.sh@960 -- # wait 1755995 00:19:46.603 17:07:02 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:46.603 17:07:02 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:46.603 17:07:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:46.603 17:07:02 -- host/digest.sh@80 -- # rw=randwrite 00:19:46.603 17:07:02 -- host/digest.sh@80 -- # bs=4096 00:19:46.603 17:07:02 -- host/digest.sh@80 -- # qd=128 00:19:46.603 17:07:02 -- host/digest.sh@80 -- # scan_dsa=false 00:19:46.603 17:07:02 -- host/digest.sh@83 -- # bperfpid=1756406 00:19:46.603 17:07:02 -- host/digest.sh@84 -- # waitforlisten 1756406 /var/tmp/bperf.sock 00:19:46.603 17:07:02 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:46.603 17:07:02 -- common/autotest_common.sh@817 -- # '[' -z 1756406 ']' 00:19:46.603 17:07:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:46.603 17:07:02 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:19:46.603 17:07:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:46.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:46.603 17:07:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:46.603 17:07:02 -- common/autotest_common.sh@10 -- # set +x 00:19:46.603 [2024-04-18 17:07:02.100544] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:46.603 [2024-04-18 17:07:02.100633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756406 ] 00:19:46.603 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.603 [2024-04-18 17:07:02.160995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.603 [2024-04-18 17:07:02.274608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.534 17:07:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.534 17:07:03 -- common/autotest_common.sh@850 -- # return 0 00:19:47.534 17:07:03 -- host/digest.sh@86 -- # false 00:19:47.534 17:07:03 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:47.534 17:07:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:47.792 17:07:03 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:47.792 17:07:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:48.049 nvme0n1 00:19:48.049 17:07:03 -- host/digest.sh@92 -- # 
bperf_py perform_tests 00:19:48.049 17:07:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:48.307 Running I/O for 2 seconds... 00:19:50.209 00:19:50.209 Latency(us) 00:19:50.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.209 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:50.209 nvme0n1 : 2.00 20111.49 78.56 0.00 0.00 6353.98 2997.67 10291.58 00:19:50.209 =================================================================================================================== 00:19:50.209 Total : 20111.49 78.56 0.00 0.00 6353.98 2997.67 10291.58 00:19:50.209 0 00:19:50.209 17:07:05 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:50.209 17:07:05 -- host/digest.sh@93 -- # get_accel_stats 00:19:50.209 17:07:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:50.209 17:07:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:50.209 17:07:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:50.209 | select(.opcode=="crc32c") 00:19:50.209 | "\(.module_name) \(.executed)"' 00:19:50.468 17:07:06 -- host/digest.sh@94 -- # false 00:19:50.468 17:07:06 -- host/digest.sh@94 -- # exp_module=software 00:19:50.468 17:07:06 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:50.468 17:07:06 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:50.468 17:07:06 -- host/digest.sh@98 -- # killprocess 1756406 00:19:50.468 17:07:06 -- common/autotest_common.sh@936 -- # '[' -z 1756406 ']' 00:19:50.468 17:07:06 -- common/autotest_common.sh@940 -- # kill -0 1756406 00:19:50.468 17:07:06 -- common/autotest_common.sh@941 -- # uname 00:19:50.468 17:07:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.468 17:07:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
1756406 00:19:50.468 17:07:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:50.468 17:07:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:50.468 17:07:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1756406' 00:19:50.468 killing process with pid 1756406 00:19:50.468 17:07:06 -- common/autotest_common.sh@955 -- # kill 1756406 00:19:50.468 Received shutdown signal, test time was about 2.000000 seconds 00:19:50.468 00:19:50.468 Latency(us) 00:19:50.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.468 =================================================================================================================== 00:19:50.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.468 17:07:06 -- common/autotest_common.sh@960 -- # wait 1756406 00:19:50.727 17:07:06 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:50.727 17:07:06 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:50.727 17:07:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:50.727 17:07:06 -- host/digest.sh@80 -- # rw=randwrite 00:19:50.727 17:07:06 -- host/digest.sh@80 -- # bs=131072 00:19:50.727 17:07:06 -- host/digest.sh@80 -- # qd=16 00:19:50.727 17:07:06 -- host/digest.sh@80 -- # scan_dsa=false 00:19:50.727 17:07:06 -- host/digest.sh@83 -- # bperfpid=1756937 00:19:50.727 17:07:06 -- host/digest.sh@84 -- # waitforlisten 1756937 /var/tmp/bperf.sock 00:19:50.727 17:07:06 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:50.727 17:07:06 -- common/autotest_common.sh@817 -- # '[' -z 1756937 ']' 00:19:50.727 17:07:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:50.727 17:07:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:50.727 17:07:06 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:50.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:50.727 17:07:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:50.727 17:07:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.727 [2024-04-18 17:07:06.405296] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:50.727 [2024-04-18 17:07:06.405402] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756937 ] 00:19:50.727 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:50.727 Zero copy mechanism will not be used. 00:19:50.727 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.985 [2024-04-18 17:07:06.465259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.985 [2024-04-18 17:07:06.576290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.985 17:07:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:50.985 17:07:06 -- common/autotest_common.sh@850 -- # return 0 00:19:50.985 17:07:06 -- host/digest.sh@86 -- # false 00:19:50.985 17:07:06 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:50.985 17:07:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:51.552 17:07:06 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:51.552 17:07:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:51.809 nvme0n1 00:19:51.809 17:07:07 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:51.809 17:07:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:51.809 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:51.809 Zero copy mechanism will not be used. 00:19:51.809 Running I/O for 2 seconds... 00:19:54.337 00:19:54.337 Latency(us) 00:19:54.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.337 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:54.337 nvme0n1 : 2.00 4312.84 539.11 0.00 0.00 3701.67 2585.03 11699.39 00:19:54.337 =================================================================================================================== 00:19:54.337 Total : 4312.84 539.11 0.00 0.00 3701.67 2585.03 11699.39 00:19:54.337 0 00:19:54.337 17:07:09 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:54.337 17:07:09 -- host/digest.sh@93 -- # get_accel_stats 00:19:54.337 17:07:09 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:54.337 17:07:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:54.337 17:07:09 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:54.337 | select(.opcode=="crc32c") 00:19:54.337 | "\(.module_name) \(.executed)"' 00:19:54.337 17:07:09 -- host/digest.sh@94 -- # false 00:19:54.337 17:07:09 -- host/digest.sh@94 -- # exp_module=software 00:19:54.337 17:07:09 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:54.337 17:07:09 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:54.337 17:07:09 -- host/digest.sh@98 -- # killprocess 1756937 00:19:54.337 17:07:09 -- common/autotest_common.sh@936 -- # '[' -z 1756937 ']' 00:19:54.337 17:07:09 -- common/autotest_common.sh@940 -- # kill -0 
1756937 00:19:54.337 17:07:09 -- common/autotest_common.sh@941 -- # uname 00:19:54.337 17:07:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:54.337 17:07:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1756937 00:19:54.337 17:07:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:54.337 17:07:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:54.337 17:07:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1756937' 00:19:54.337 killing process with pid 1756937 00:19:54.337 17:07:09 -- common/autotest_common.sh@955 -- # kill 1756937 00:19:54.337 Received shutdown signal, test time was about 2.000000 seconds 00:19:54.337 00:19:54.337 Latency(us) 00:19:54.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.337 =================================================================================================================== 00:19:54.337 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.337 17:07:09 -- common/autotest_common.sh@960 -- # wait 1756937 00:19:54.595 17:07:10 -- host/digest.sh@132 -- # killprocess 1755310 00:19:54.595 17:07:10 -- common/autotest_common.sh@936 -- # '[' -z 1755310 ']' 00:19:54.595 17:07:10 -- common/autotest_common.sh@940 -- # kill -0 1755310 00:19:54.595 17:07:10 -- common/autotest_common.sh@941 -- # uname 00:19:54.595 17:07:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:54.595 17:07:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1755310 00:19:54.595 17:07:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:54.595 17:07:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:54.596 17:07:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1755310' 00:19:54.596 killing process with pid 1755310 00:19:54.596 17:07:10 -- common/autotest_common.sh@955 -- # kill 1755310 00:19:54.596 17:07:10 -- common/autotest_common.sh@960 
-- # wait 1755310 00:19:54.854 00:19:54.854 real 0m17.550s 00:19:54.854 user 0m34.575s 00:19:54.854 sys 0m4.217s 00:19:54.854 17:07:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:54.854 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:54.854 ************************************ 00:19:54.854 END TEST nvmf_digest_clean 00:19:54.854 ************************************ 00:19:54.854 17:07:10 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:54.854 17:07:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:54.854 17:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:54.854 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:54.854 ************************************ 00:19:54.854 START TEST nvmf_digest_error 00:19:54.854 ************************************ 00:19:54.854 17:07:10 -- common/autotest_common.sh@1111 -- # run_digest_error 00:19:54.854 17:07:10 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:54.854 17:07:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:54.854 17:07:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:54.854 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:54.854 17:07:10 -- nvmf/common.sh@470 -- # nvmfpid=1757387 00:19:54.854 17:07:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:54.854 17:07:10 -- nvmf/common.sh@471 -- # waitforlisten 1757387 00:19:54.854 17:07:10 -- common/autotest_common.sh@817 -- # '[' -z 1757387 ']' 00:19:54.854 17:07:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.854 17:07:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:54.854 17:07:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:54.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.854 17:07:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:54.854 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.112 [2024-04-18 17:07:10.579066] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:55.112 [2024-04-18 17:07:10.579139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.112 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.113 [2024-04-18 17:07:10.640882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.113 [2024-04-18 17:07:10.744342] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.113 [2024-04-18 17:07:10.744428] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.113 [2024-04-18 17:07:10.744457] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.113 [2024-04-18 17:07:10.744468] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.113 [2024-04-18 17:07:10.744478] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.113 [2024-04-18 17:07:10.744505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.113 17:07:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.113 17:07:10 -- common/autotest_common.sh@850 -- # return 0 00:19:55.113 17:07:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:55.113 17:07:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:55.113 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.113 17:07:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.113 17:07:10 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:55.113 17:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.113 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.113 [2024-04-18 17:07:10.809126] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:55.113 17:07:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.113 17:07:10 -- host/digest.sh@105 -- # common_target_config 00:19:55.113 17:07:10 -- host/digest.sh@43 -- # rpc_cmd 00:19:55.113 17:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.113 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.372 null0 00:19:55.372 [2024-04-18 17:07:10.924286] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.372 [2024-04-18 17:07:10.948525] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.372 17:07:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.372 17:07:10 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:55.372 17:07:10 -- host/digest.sh@54 -- # local rw bs qd 00:19:55.372 17:07:10 -- host/digest.sh@56 -- # rw=randread 00:19:55.372 17:07:10 -- host/digest.sh@56 -- # bs=4096 00:19:55.372 17:07:10 -- host/digest.sh@56 -- # qd=128 00:19:55.372 17:07:10 -- 
host/digest.sh@58 -- # bperfpid=1757529 00:19:55.372 17:07:10 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:55.372 17:07:10 -- host/digest.sh@60 -- # waitforlisten 1757529 /var/tmp/bperf.sock 00:19:55.372 17:07:10 -- common/autotest_common.sh@817 -- # '[' -z 1757529 ']' 00:19:55.372 17:07:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:55.372 17:07:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:55.372 17:07:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:55.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:55.372 17:07:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:55.372 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.372 [2024-04-18 17:07:10.992755] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:19:55.372 [2024-04-18 17:07:10.992825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1757529 ] 00:19:55.372 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.372 [2024-04-18 17:07:11.051918] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.631 [2024-04-18 17:07:11.162462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.631 17:07:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.631 17:07:11 -- common/autotest_common.sh@850 -- # return 0 00:19:55.631 17:07:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:55.631 17:07:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:55.889 17:07:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:55.889 17:07:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.889 17:07:11 -- common/autotest_common.sh@10 -- # set +x 00:19:55.889 17:07:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.889 17:07:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:55.889 17:07:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.456 nvme0n1 00:19:56.456 17:07:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:56.456 17:07:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.456 17:07:12 -- common/autotest_common.sh@10 -- # 
set +x 00:19:56.456 17:07:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.456 17:07:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:56.456 17:07:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:56.456 Running I/O for 2 seconds... 00:19:56.456 [2024-04-18 17:07:12.137115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.456 [2024-04-18 17:07:12.137164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.456 [2024-04-18 17:07:12.137185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.456 [2024-04-18 17:07:12.154137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.456 [2024-04-18 17:07:12.154174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.456 [2024-04-18 17:07:12.154194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.167878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.167914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.167933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.182621] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.182670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.182690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.197023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.197069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.197085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.213276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.213308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.213324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.225468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.225496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.225527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.240034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.240064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.240094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.254300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.254330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.254361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.266427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.266472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.266490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.282987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.283022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.283041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.299556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.299585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.299616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.315628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.315659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.714 [2024-04-18 17:07:12.315692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.714 [2024-04-18 17:07:12.327822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.714 [2024-04-18 17:07:12.327856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.715 [2024-04-18 17:07:12.327875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.715 [2024-04-18 17:07:12.343673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.715 [2024-04-18 17:07:12.343708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.715 [2024-04-18 
17:07:12.343727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.715 [2024-04-18 17:07:12.356547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.715 [2024-04-18 17:07:12.356582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.715 [2024-04-18 17:07:12.356601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.715 [2024-04-18 17:07:12.372673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.715 [2024-04-18 17:07:12.372702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.715 [2024-04-18 17:07:12.372739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.715 [2024-04-18 17:07:12.386745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.715 [2024-04-18 17:07:12.386775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.715 [2024-04-18 17:07:12.386792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.715 [2024-04-18 17:07:12.399855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.715 [2024-04-18 17:07:12.399890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8318 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.715 [2024-04-18 17:07:12.399909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.715 [2024-04-18 17:07:12.412595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.715 [2024-04-18 17:07:12.412625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.715 [2024-04-18 17:07:12.412658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.973 [2024-04-18 17:07:12.428747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.973 [2024-04-18 17:07:12.428781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.973 [2024-04-18 17:07:12.428800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.446281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.446326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.446346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.459402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.459434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.459454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.471657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.471689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.471709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.486002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.486031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.486062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.497770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.497811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.497830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.512628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.512659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.512690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.529256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.529291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.529310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.546634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.546663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.546700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.558001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.558035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.558054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.574747] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.574780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.574798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.590570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.590601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.590617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.601477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.601507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.601538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.616746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.616776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.616792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.632907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.632941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.632960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.643865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.643895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.643927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.660417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.660464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.660479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.974 [2024-04-18 17:07:12.676320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:56.974 [2024-04-18 17:07:12.676355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.974 [2024-04-18 17:07:12.676373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.232 [2024-04-18 17:07:12.693710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.232 [2024-04-18 17:07:12.693744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.232 [2024-04-18 17:07:12.693764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.232 [2024-04-18 17:07:12.707873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.232 [2024-04-18 17:07:12.707909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.232 [2024-04-18 17:07:12.707927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.232 [2024-04-18 17:07:12.724045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.232 [2024-04-18 17:07:12.724079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.724111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.735990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.736025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 
17:07:12.736044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.753454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.753488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.753521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.764242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.764292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.764311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.780622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.780651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.780681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.794864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.794894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25401 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.794912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.806819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.806849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.806865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.822507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.822538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.822557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.833141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.833168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.833198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.848066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.848095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.848126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.861501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.861530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.861547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.873978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.874008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.874025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.887366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.887408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.887428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.904047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.904082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.904100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.917253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.917284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.917300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.233 [2024-04-18 17:07:12.929086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.233 [2024-04-18 17:07:12.929120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.233 [2024-04-18 17:07:12.929139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.491 [2024-04-18 17:07:12.943261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.491 [2024-04-18 17:07:12.943290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.491 [2024-04-18 17:07:12.943321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.491 [2024-04-18 17:07:12.957256] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.491 [2024-04-18 17:07:12.957286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.491 [2024-04-18 17:07:12.957303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.491 [2024-04-18 17:07:12.969685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.491 [2024-04-18 17:07:12.969719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.491 [2024-04-18 17:07:12.969738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.491 [2024-04-18 17:07:12.983356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.491 [2024-04-18 17:07:12.983411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.491 [2024-04-18 17:07:12.983438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.491 [2024-04-18 17:07:12.996313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.491 [2024-04-18 17:07:12.996363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.491 [2024-04-18 17:07:12.996392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:19:57.491 [2024-04-18 17:07:13.013888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.491 [2024-04-18 17:07:13.013935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.491 [2024-04-18 17:07:13.013951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.491 [2024-04-18 17:07:13.025699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.491 [2024-04-18 17:07:13.025735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.025754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.042287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.042319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.042335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.058739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.058768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.058800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.070466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.070494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.070510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.087561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.087590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.087620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.101088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.101118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.101136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.113372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.113415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 
17:07:13.113437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.128916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.128946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.128963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.143548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.143577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.143608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.154646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.154694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.154709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.170034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.170081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24704 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.170100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.492 [2024-04-18 17:07:13.186708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.492 [2024-04-18 17:07:13.186753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.492 [2024-04-18 17:07:13.186772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.202024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.202059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.202078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.214120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.214153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.214171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.228009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.228043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.228062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.245271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.245305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.245323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.262133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.262168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.262187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.280297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.280331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.280350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.294251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 
00:19:57.749 [2024-04-18 17:07:13.294286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.294305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.306726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.306761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.306779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.323233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.323268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.323287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.338194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.338229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.338248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.355707] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.355737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.355770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.371039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.749 [2024-04-18 17:07:13.371087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.749 [2024-04-18 17:07:13.371109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.749 [2024-04-18 17:07:13.382861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.750 [2024-04-18 17:07:13.382890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.750 [2024-04-18 17:07:13.382922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.750 [2024-04-18 17:07:13.399195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.750 [2024-04-18 17:07:13.399230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.750 [2024-04-18 17:07:13.399248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:19:57.750 [2024-04-18 17:07:13.410137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.750 [2024-04-18 17:07:13.410170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.750 [2024-04-18 17:07:13.410188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.750 [2024-04-18 17:07:13.426153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.750 [2024-04-18 17:07:13.426183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.750 [2024-04-18 17:07:13.426198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.750 [2024-04-18 17:07:13.440974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.750 [2024-04-18 17:07:13.441003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.750 [2024-04-18 17:07:13.441020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.750 [2024-04-18 17:07:13.452683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:57.750 [2024-04-18 17:07:13.452732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.750 [2024-04-18 17:07:13.452750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.469811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.469841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.469857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.485286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.485315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.485346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.497082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.497128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.497144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.514461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.514489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.514519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.525547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.525575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.525607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.541344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.541374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.541398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.559558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.559587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.559619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.570428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.570456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4479 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.570472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.586374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.586414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.586433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.601942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.601972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.601988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.613600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.613642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.613663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.628039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.628073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:41 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.007 [2024-04-18 17:07:13.628092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.007 [2024-04-18 17:07:13.645023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.007 [2024-04-18 17:07:13.645057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.008 [2024-04-18 17:07:13.645075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.008 [2024-04-18 17:07:13.662330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.008 [2024-04-18 17:07:13.662360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.008 [2024-04-18 17:07:13.662376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.008 [2024-04-18 17:07:13.677399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.008 [2024-04-18 17:07:13.677428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.008 [2024-04-18 17:07:13.677458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.008 [2024-04-18 17:07:13.689196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0) 00:19:58.008 [2024-04-18 
17:07:13.689230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.008 [2024-04-18 17:07:13.689249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.008 [2024-04-18 17:07:13.706547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.008 [2024-04-18 17:07:13.706579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.008 [2024-04-18 17:07:13.706596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.720296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.720327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.720343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.732634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.732682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.732700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.750339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.750404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.750425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.767525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.767555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.767587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.778942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.778973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.778991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.792267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.792296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.792311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.808131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.808159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.808190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.824840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.824871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.824888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.839224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.839258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.839276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.854339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.854390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.854408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.864822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.864849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.864879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.879952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.879983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.880000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.894431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.894462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.894483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.906209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.906240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.906257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.921247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.921275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.921307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.932513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.932541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.932572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.947598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.947629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.947646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.267 [2024-04-18 17:07:13.963447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.267 [2024-04-18 17:07:13.963478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.267 [2024-04-18 17:07:13.963495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:13.974847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:13.974877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:13.974894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:13.988760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:13.988789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:13.988812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.004503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.004544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.004576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.016963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.016995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.017017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.028458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.028502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.028518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.042970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.043001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.043018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.055155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.055199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.055216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.067085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.067113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.067144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.080340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.080369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.080407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.094230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.094260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.094277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.108863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.108899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.108936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526 [2024-04-18 17:07:14.120349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fb9c0)
00:19:58.526 [2024-04-18 17:07:14.120388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.526 [2024-04-18 17:07:14.120408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.526
00:19:58.526 Latency(us)
00:19:58.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:58.526 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:58.526 nvme0n1 : 2.00 17728.68 69.25 0.00 0.00 7210.44 3106.89 25437.68
00:19:58.526 ===================================================================================================================
00:19:58.526 Total : 17728.68 69.25 0.00 0.00 7210.44 3106.89 25437.68
00:19:58.526 0
00:19:58.526 17:07:14 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:58.526 17:07:14 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:58.526 17:07:14 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:58.526 | .driver_specific
00:19:58.526 | .nvme_error
00:19:58.526 | .status_code
00:19:58.526 | .command_transient_transport_error'
00:19:58.526 17:07:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:58.784 17:07:14 -- host/digest.sh@71 -- # (( 139 > 0 ))
00:19:58.784 17:07:14 -- host/digest.sh@73 -- # killprocess 1757529
00:19:58.784 17:07:14 -- common/autotest_common.sh@936 -- # '[' -z 1757529 ']'
00:19:58.784 17:07:14 -- common/autotest_common.sh@940 -- # kill -0 1757529
00:19:58.784 17:07:14 -- common/autotest_common.sh@941 -- # uname
00:19:58.784 17:07:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:58.784 17:07:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1757529
00:19:58.784 17:07:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:58.784 17:07:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:58.784 17:07:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1757529'
killing process with pid 1757529
17:07:14 -- common/autotest_common.sh@955 -- # kill 1757529
Received shutdown signal, test time was about 2.000000 seconds
00:19:58.784
00:19:58.784 Latency(us)
00:19:58.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:58.784 ===================================================================================================================
00:19:58.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:58.784 17:07:14 -- common/autotest_common.sh@960 -- # wait 1757529
00:19:59.042 17:07:14 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:19:59.042 17:07:14 -- host/digest.sh@54 -- # local rw bs qd
00:19:59.042 17:07:14 -- host/digest.sh@56 -- # rw=randread
00:19:59.042 17:07:14 -- host/digest.sh@56 -- # bs=131072
00:19:59.042 17:07:14 -- host/digest.sh@56 -- # qd=16
00:19:59.042 17:07:14 -- host/digest.sh@58 -- # bperfpid=1757939
00:19:59.042 17:07:14 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:19:59.042 17:07:14 -- host/digest.sh@60 -- # waitforlisten 1757939 /var/tmp/bperf.sock
00:19:59.042 17:07:14 -- common/autotest_common.sh@817 -- # '[' -z 1757939 ']'
00:19:59.042 17:07:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:59.042 17:07:14 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:59.042 17:07:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
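[Editor's note] The `get_transient_errcount` step traced above pulls the transient-transport-error counter out of the `bdev_get_iostat` JSON with `jq`. A minimal stand-alone sketch of that extraction, using a hypothetical canned JSON payload in place of a live bdevperf RPC socket (the counter value 139 mirrors the `(( 139 > 0 ))` check in this trace; field names follow the jq path shown above):

```shell
# Hypothetical stand-in for the JSON that `rpc.py -s /var/tmp/bperf.sock
# bdev_get_iostat -b nvme0n1` would return from a live bdevperf process.
iostat='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":139}}}}]}'

# Same jq filter as host/digest.sh: descend into the first bdev's NVMe error
# counters and print the transient transport error count.
errcount=$(printf '%s' "$iostat" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

echo "$errcount"   # prints 139
```

The test passes when this count is non-zero, i.e. every injected crc32c corruption surfaced as a transient transport error instead of silently corrupted data.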
00:19:59.042 17:07:14 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:59.042 17:07:14 -- common/autotest_common.sh@10 -- # set +x
00:19:59.042 [2024-04-18 17:07:14.728269] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
00:19:59.042 [2024-04-18 17:07:14.728348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1757939 ]
00:19:59.042 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:59.042 Zero copy mechanism will not be used.
00:19:59.300 EAL: No free 2048 kB hugepages reported on node 1
00:19:59.300 [2024-04-18 17:07:14.788951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:59.300 [2024-04-18 17:07:14.900623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:59.558 17:07:15 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:59.558 17:07:15 -- common/autotest_common.sh@850 -- # return 0
00:19:59.558 17:07:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:59.558 17:07:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:59.816 17:07:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:59.816 17:07:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:59.816 17:07:15 -- common/autotest_common.sh@10 -- # set +x
00:19:59.816 17:07:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:59.816 17:07:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:59.816 17:07:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:00.074 nvme0n1
00:20:00.074 17:07:15 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:00.074 17:07:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:00.074 17:07:15 -- common/autotest_common.sh@10 -- # set +x
00:20:00.074 17:07:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:00.074 17:07:15 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:00.074 17:07:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:00.334 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:00.334 Zero copy mechanism will not be used.
00:20:00.334 Running I/O for 2 seconds...
00:20:00.334 [2024-04-18 17:07:15.853963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.854034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.854056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.334 [2024-04-18 17:07:15.860815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.860853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.860889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.334 [2024-04-18 17:07:15.867859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.867886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.867929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.334 [2024-04-18 17:07:15.874845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.874873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.874889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.334 [2024-04-18 17:07:15.881834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.881862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.881878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.334 [2024-04-18 17:07:15.888881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.888923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.888940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.334 [2024-04-18 17:07:15.895810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.895838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.895854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.334 [2024-04-18 17:07:15.902742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.902772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.902788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.334 [2024-04-18 17:07:15.909115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.334 [2024-04-18 17:07:15.909145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.334 [2024-04-18 17:07:15.909162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.914879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.914909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.914926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.921169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.921199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.921216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.928147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.928181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.928199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.935014] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.935043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.935059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.941982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.942011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.942028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.948981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.949009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.949041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.955979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.956009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.956026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.962992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.963022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.963039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.969395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.969424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.969440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.976419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.976449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.976465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.983211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.983240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.983256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.990153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.990182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.990198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:15.997104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:15.997133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:15.997149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:16.004027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:16.004057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:16.004074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:16.008649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:16.008678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:16.008695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:16.013915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:16.013946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:16.013963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:16.020613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:16.020652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:16.020683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:16.027592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:16.027635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:16.027651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.335 [2024-04-18 17:07:16.034155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.335 [2024-04-18 17:07:16.034184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.335 [2024-04-18 17:07:16.034200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.597 [2024-04-18 17:07:16.041062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.598 [2024-04-18 17:07:16.041108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.598 [2024-04-18 17:07:16.041130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.598 [2024-04-18 17:07:16.047409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.598 [2024-04-18 17:07:16.047445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.598 [2024-04-18 17:07:16.047462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.598 [2024-04-18 17:07:16.054392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.598 [2024-04-18 17:07:16.054422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.598 [2024-04-18 17:07:16.054438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.598 [2024-04-18 17:07:16.061300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.598 [2024-04-18 17:07:16.061330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.598 [2024-04-18 17:07:16.061347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.598 [2024-04-18 17:07:16.069807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.598 [2024-04-18 17:07:16.069837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.598 [2024-04-18 17:07:16.069854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.598 [2024-04-18 17:07:16.078554]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.078584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.078601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.086795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.086826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.086842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.093941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.093971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.093987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.101055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.101083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.101100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.108011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.108046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.108063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.114757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.114786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.114802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.121563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.121592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.121608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.128426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.128455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.128471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.135330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.135359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.135376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.142059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.142088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.142104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.148865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.148894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.148910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.155785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.155814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.155845] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.162687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.162715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.162732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.169542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.169572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.169587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.176497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.176526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.176542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.183409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.183439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.183455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.190343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.190372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.190396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.197291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.197319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.197335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.204145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.204174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.204190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.211080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.211109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.211126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.217955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.217983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.217999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.598 [2024-04-18 17:07:16.224848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.598 [2024-04-18 17:07:16.224883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.598 [2024-04-18 17:07:16.224900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.231742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.231770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.231802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.238584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.238612] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.238628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.245545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.245573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.245589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.252331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.252359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.252375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.259252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.259280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.259297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.266290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.266318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.266335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.273155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.273183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.273200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.280063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.280092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.280107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.286836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.286865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.286881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.293679] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.293707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.293724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.599 [2024-04-18 17:07:16.300709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.599 [2024-04-18 17:07:16.300738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.599 [2024-04-18 17:07:16.300754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.860 [2024-04-18 17:07:16.307350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.307388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.307407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.314105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.314134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.314150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.321857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.321888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.321905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.331209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.331240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.331257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.339616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.339646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.339663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.347402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.347432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.347456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.354333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.354362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.354378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.361244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.361273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.361289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.368003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.368032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.368048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.374837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.374881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 
17:07:16.374897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.382563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.382593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.382610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.390951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.390982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.390998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.399014] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.399045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.399062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.407172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.407204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.407221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.415468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.415505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.415522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.424014] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.424046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.424063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.432066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.432097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.432113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.439426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.439457] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.439474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.447205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.447235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.447252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.456267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.456298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.456315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.464836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 17:07:16.464866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.861 [2024-04-18 17:07:16.464882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.861 [2024-04-18 17:07:16.473199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:00.861 [2024-04-18 
17:07:16.473241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.861 [2024-04-18 17:07:16.473258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.861 [2024-04-18 17:07:16.481851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.861 [2024-04-18 17:07:16.481882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.861 [2024-04-18 17:07:16.481899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.861 [2024-04-18 17:07:16.492002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.861 [2024-04-18 17:07:16.492034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.861 [2024-04-18 17:07:16.492051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.861 [2024-04-18 17:07:16.501233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.861 [2024-04-18 17:07:16.501264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.861 [2024-04-18 17:07:16.501281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.861 [2024-04-18 17:07:16.508703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.861 [2024-04-18 17:07:16.508750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.861 [2024-04-18 17:07:16.508767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.861 [2024-04-18 17:07:16.514001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.862 [2024-04-18 17:07:16.514044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.862 [2024-04-18 17:07:16.514059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.862 [2024-04-18 17:07:16.522792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.862 [2024-04-18 17:07:16.522822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.862 [2024-04-18 17:07:16.522838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:00.862 [2024-04-18 17:07:16.531534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.862 [2024-04-18 17:07:16.531565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.862 [2024-04-18 17:07:16.531597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:00.862 [2024-04-18 17:07:16.540662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.862 [2024-04-18 17:07:16.540708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.862 [2024-04-18 17:07:16.540727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:00.862 [2024-04-18 17:07:16.549853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.862 [2024-04-18 17:07:16.549888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.862 [2024-04-18 17:07:16.549907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:00.862 [2024-04-18 17:07:16.557736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:00.862 [2024-04-18 17:07:16.557770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.862 [2024-04-18 17:07:16.557795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.123 [2024-04-18 17:07:16.566607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.123 [2024-04-18 17:07:16.566638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.123 [2024-04-18 17:07:16.566655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.123 [2024-04-18 17:07:16.575097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.123 [2024-04-18 17:07:16.575131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.123 [2024-04-18 17:07:16.575150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.123 [2024-04-18 17:07:16.585399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.123 [2024-04-18 17:07:16.585445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.123 [2024-04-18 17:07:16.585462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.123 [2024-04-18 17:07:16.594905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.123 [2024-04-18 17:07:16.594939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.123 [2024-04-18 17:07:16.594958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.123 [2024-04-18 17:07:16.603806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.123 [2024-04-18 17:07:16.603839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.123 [2024-04-18 17:07:16.603858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.123 [2024-04-18 17:07:16.611921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.123 [2024-04-18 17:07:16.611955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.123 [2024-04-18 17:07:16.611974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.123 [2024-04-18 17:07:16.619971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.123 [2024-04-18 17:07:16.620004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.123 [2024-04-18 17:07:16.620023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.123 [2024-04-18 17:07:16.627652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.123 [2024-04-18 17:07:16.627695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.123 [2024-04-18 17:07:16.627710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.635171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.635210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.635229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.642745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.642779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.642797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.650066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.650098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.650116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.657590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.657632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.657648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.665455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.665484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.665500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.672953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.672986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.673004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.680540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.680581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.680597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.688149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.688181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.688200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.695569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.695596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.695617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.702841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.702873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.702892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.710244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.710276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.710295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.717702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.717734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.717752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.725391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.725437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.725454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.732849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.732881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.732899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.740518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.740546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.740562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.748397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.748443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.748458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.755940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.755973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.755991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.763395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.763446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.763463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.770942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.770974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.770992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.778371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.778413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.778432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.785856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.785889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.785907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.793289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.793321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.793339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.800737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.800769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.800787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.808212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.808244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.808262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.815649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.815692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.815707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.124 [2024-04-18 17:07:16.823531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.124 [2024-04-18 17:07:16.823560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.124 [2024-04-18 17:07:16.823590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.831254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.831288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.831305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.838667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.838696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.838728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.846151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.846183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.846201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.853560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.853589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.853605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.860993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.861025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.861043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.868289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.868321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.868339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.875743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.875774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.875793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.883166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.883198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.883215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.890651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.890679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.890702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.898005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.898037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.898054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.905296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.905329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.905347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.912743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.912775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.912793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.920350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.920391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.920411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.927764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.927796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.927813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.935166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.935198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.935215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.942729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.942761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.942780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.950128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.950159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.950177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.957478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.957512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.957529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.964884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.964916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.964934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.972343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.972376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.972404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.979743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.979776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.979794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.987499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.987527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.987556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:16.995033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:16.995064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:16.995083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.002547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.002589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.002605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.010007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.010040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.010058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.017409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.017452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.017467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.024988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.025022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.025040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.032482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.032511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.032528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.039949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.039981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.039998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.047550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.047580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.047596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.054988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.055021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.055039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.062537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.062579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.062594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.386 [2024-04-18 17:07:17.070155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.386 [2024-04-18 17:07:17.070187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.386 [2024-04-18 17:07:17.070205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.386 [2024-04-18 17:07:17.077712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.386 [2024-04-18 17:07:17.077758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.386 [2024-04-18 17:07:17.077776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.386 [2024-04-18 17:07:17.085437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.386 [2024-04-18 17:07:17.085467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.386 [2024-04-18 17:07:17.085491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.092945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.092978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.092995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.100407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.100452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.100468] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.107785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.107819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.107837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.115155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.115188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.115206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.122555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.122583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.122614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.130061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.130093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.130111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.137454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.137483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.137499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.145190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.145224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.145242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.152584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.152612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.152628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.159933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.159966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.159984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.167376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.167431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.167448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.174782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.174814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.174832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.182176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.182207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.182225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.189525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.189554] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.189570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.648 [2024-04-18 17:07:17.196988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.648 [2024-04-18 17:07:17.197020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.648 [2024-04-18 17:07:17.197038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.204332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.204364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.204389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.211776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.211808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.211832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.219489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.219519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.219535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.227424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.227453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.227469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.234835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.234867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.234885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.242192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.242223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.242242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.249564] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.249591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.249607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.257279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.257313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.257332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.264715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.264748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.264766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.272154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.272187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.272205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.279515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.279548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.279580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.287547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.287575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.287591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.294932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.294964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.294982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.302305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.302337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.302355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.309826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.309858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.309876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.317196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.317228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.317245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.324822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.324855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.324874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.332181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.332212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.332230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.339494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.339522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.339538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.649 [2024-04-18 17:07:17.346938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.649 [2024-04-18 17:07:17.346970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.649 [2024-04-18 17:07:17.346987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.909 [2024-04-18 17:07:17.354327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.909 [2024-04-18 17:07:17.354360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.909 [2024-04-18 17:07:17.354378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.909 [2024-04-18 17:07:17.361674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.909 [2024-04-18 17:07:17.361704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:01.909 [2024-04-18 17:07:17.361738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.909 [2024-04-18 17:07:17.369149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.909 [2024-04-18 17:07:17.369180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.369198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.376470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.376498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.376514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.383980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.384011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.384028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.391576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.391605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.391621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.398904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.398936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.398953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.406362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.406408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.406450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.413834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.413865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.413883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.421200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.421231] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.421249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.428519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.428547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.428564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.435854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.435885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.435902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.443197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0) 00:20:01.910 [2024-04-18 17:07:17.443228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.910 [2024-04-18 17:07:17.443246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.910 [2024-04-18 17:07:17.450580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.450607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.450622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.457874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.457907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.457925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.465328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.465359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.465377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.472832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.472869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.472888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.480343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.480375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.480402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.487807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.487839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.487857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.495181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.495212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.495229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.502519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.502546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.502562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.509935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.509968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.509986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.517372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.517426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.517443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.524790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.524821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.524839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.532513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.532556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.532571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.539168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.539199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.539215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.545918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.545963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.545980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.552947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.552977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.552993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.559851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.910 [2024-04-18 17:07:17.559907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.910 [2024-04-18 17:07:17.559923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.910 [2024-04-18 17:07:17.566340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.911 [2024-04-18 17:07:17.566370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.911 [2024-04-18 17:07:17.566397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.911 [2024-04-18 17:07:17.573431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.911 [2024-04-18 17:07:17.573465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.911 [2024-04-18 17:07:17.573481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.911 [2024-04-18 17:07:17.580261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.911 [2024-04-18 17:07:17.580289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.911 [2024-04-18 17:07:17.580305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.911 [2024-04-18 17:07:17.587119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.911 [2024-04-18 17:07:17.587148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.911 [2024-04-18 17:07:17.587164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:01.911 [2024-04-18 17:07:17.594119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.911 [2024-04-18 17:07:17.594149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.911 [2024-04-18 17:07:17.594171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:01.911 [2024-04-18 17:07:17.601106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.911 [2024-04-18 17:07:17.601134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.911 [2024-04-18 17:07:17.601153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.911 [2024-04-18 17:07:17.607960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.911 [2024-04-18 17:07:17.607990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.911 [2024-04-18 17:07:17.608006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:01.911 [2024-04-18 17:07:17.614745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:01.911 [2024-04-18 17:07:17.614774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.614790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.622561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.170 [2024-04-18 17:07:17.622592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.622609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.630105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.170 [2024-04-18 17:07:17.630135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.630152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.637234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.170 [2024-04-18 17:07:17.637263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.637280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.644053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.170 [2024-04-18 17:07:17.644082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.644098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.651219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.170 [2024-04-18 17:07:17.651248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.651265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.658486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.170 [2024-04-18 17:07:17.658516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.658532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.665418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.170 [2024-04-18 17:07:17.665447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.665464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.672935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.170 [2024-04-18 17:07:17.672973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.170 [2024-04-18 17:07:17.672990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.170 [2024-04-18 17:07:17.680629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.680660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.680683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.688665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.688711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.688727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.695961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.695992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.696009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.702826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.702855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.702872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.709905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.709950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.709966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.716988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.717017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.717040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.723900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.723929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.723946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.730816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.730845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.730861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.737658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.737688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.737704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.744547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.744576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.744592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.751616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.751646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.751663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.758525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.758556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.758572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.765365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.765403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.765420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.772265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.772294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.772310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.776734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.776769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.776786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.781842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.781872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.781887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.788319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.788349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.788365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.795149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.795178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.795194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.801905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.801934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.801950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.808701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.808731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.808747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.815466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.815494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.815525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.822273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.822316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.822331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.829018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.829047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.829063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.835816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.835859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.171 [2024-04-18 17:07:17.835874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:02.171 [2024-04-18 17:07:17.842685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.171 [2024-04-18 17:07:17.842727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.172 [2024-04-18 17:07:17.842743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:02.172 [2024-04-18 17:07:17.849392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d34d0)
00:20:02.172 [2024-04-18 17:07:17.849420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.172 [2024-04-18 17:07:17.849437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:02.172
00:20:02.172 Latency(us)
00:20:02.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:02.172 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:02.172 nvme0n1 : 2.00 4228.02 528.50 0.00 0.00 3779.38 867.75 10194.49
00:20:02.172 ===================================================================================================================
00:20:02.172 Total : 4228.02 528.50 0.00 0.00 3779.38 867.75 10194.49
00:20:02.172 0
00:20:02.172 17:07:17 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:02.172 17:07:17 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:02.172 17:07:17 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:02.172 | .driver_specific
00:20:02.172 | .nvme_error
00:20:02.172 | .status_code
00:20:02.172 | .command_transient_transport_error'
00:20:02.172 17:07:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:02.430 17:07:18 -- host/digest.sh@71 -- # (( 273 > 0 ))
00:20:02.430 17:07:18 -- host/digest.sh@73 -- # killprocess 1757939
00:20:02.430 17:07:18 -- common/autotest_common.sh@936 -- # '[' -z 1757939 ']'
00:20:02.430 17:07:18 -- common/autotest_common.sh@940 -- # kill -0 1757939
00:20:02.430 17:07:18 -- common/autotest_common.sh@941 -- # uname
00:20:02.430 17:07:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:02.430 17:07:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1757939
00:20:02.690 17:07:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:02.690 17:07:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:02.690 17:07:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1757939'
00:20:02.690 killing process with pid 1757939
00:20:02.690 17:07:18 -- common/autotest_common.sh@955 -- # kill 1757939
00:20:02.690 Received shutdown signal, test time was about 2.000000 seconds
00:20:02.690
00:20:02.690 Latency(us)
00:20:02.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:02.690 ===================================================================================================================
00:20:02.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:02.690 17:07:18 -- common/autotest_common.sh@960 -- # wait 1757939
00:20:02.950 17:07:18 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:20:02.951 17:07:18 -- host/digest.sh@54 -- # local rw bs qd
00:20:02.951 17:07:18 -- host/digest.sh@56 -- # rw=randwrite
00:20:02.951 17:07:18 -- host/digest.sh@56 -- # bs=4096
00:20:02.951 17:07:18 -- host/digest.sh@56 -- # qd=128
00:20:02.951 17:07:18 -- host/digest.sh@58 -- # bperfpid=1758348
00:20:02.951 17:07:18 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:20:02.951 17:07:18 -- host/digest.sh@60 -- # waitforlisten 1758348 /var/tmp/bperf.sock
00:20:02.951 17:07:18 -- common/autotest_common.sh@817 -- # '[' -z 1758348 ']'
00:20:02.951 17:07:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:02.951 17:07:18 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:02.951 17:07:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:02.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:02.951 17:07:18 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:02.951 17:07:18 -- common/autotest_common.sh@10 -- # set +x
00:20:02.951 [2024-04-18 17:07:18.456242] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
00:20:02.951 [2024-04-18 17:07:18.456319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758348 ]
00:20:02.951 EAL: No free 2048 kB hugepages reported on node 1
00:20:02.951 [2024-04-18 17:07:18.517093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:02.951 [2024-04-18 17:07:18.629971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:03.209 17:07:18 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:03.209 17:07:18 -- common/autotest_common.sh@850 -- # return 0
00:20:03.209 17:07:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:03.209 17:07:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:03.467 17:07:19 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:03.467 17:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:03.467 17:07:19 -- common/autotest_common.sh@10 -- # set +x
00:20:03.467 17:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:03.467 17:07:19 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:03.467 17:07:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:04.037 nvme0n1
00:20:04.037 17:07:19 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:20:04.037 17:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:04.037 17:07:19 -- common/autotest_common.sh@10 -- # set +x
00:20:04.037 17:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:04.037 17:07:19 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:04.037 17:07:19 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:04.037 Running I/O for 2 seconds...
00:20:04.037 [2024-04-18 17:07:19.628060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ee5c8
00:20:04.037 [2024-04-18 17:07:19.629108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.037 [2024-04-18 17:07:19.629151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:20:04.037 [2024-04-18 17:07:19.640090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190eea00
00:20:04.037 [2024-04-18 17:07:19.641259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.037 [2024-04-18 17:07:19.641289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:20:04.037 [2024-04-18 17:07:19.652636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ecc78
00:20:04.037 [2024-04-18 17:07:19.653970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.038 [2024-04-18 17:07:19.654000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:20:04.038 [2024-04-18 17:07:19.665097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f9b30
00:20:04.038 [2024-04-18 17:07:19.666501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.038 [2024-04-18 17:07:19.666530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:20:04.038 [2024-04-18 17:07:19.677645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e49b0
00:20:04.038 [2024-04-18 17:07:19.679293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.038 [2024-04-18 17:07:19.679322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:20:04.038 [2024-04-18 17:07:19.690188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fcdd0
00:20:04.038 [2024-04-18 17:07:19.691947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.038 [2024-04-18 17:07:19.691976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:20:04.038 [2024-04-18 17:07:19.702699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ee5c8
00:20:04.038 [2024-04-18 17:07:19.704579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.038 [2024-04-18 17:07:19.704606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:20:04.038 [2024-04-18 17:07:19.711167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f3e60
00:20:04.038 [2024-04-18 17:07:19.712004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.038 [2024-04-18 17:07:19.712032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:20:04.038 [2024-04-18 17:07:19.723673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fdeb0
00:20:04.038 [2024-04-18 17:07:19.724683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.038 [2024-04-18 17:07:19.724712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:20:04.038 [2024-04-18 17:07:19.734946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e73e0
00:20:04.038 [2024-04-18 17:07:19.735965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.038 [2024-04-18 17:07:19.735993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:20:04.298 [2024-04-18 17:07:19.747777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f31b8
00:20:04.298 [2024-04-18 17:07:19.748969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:04.298 [2024-04-18 17:07:19.749006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:20:04.298 [2024-04-18 17:07:19.760291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190eea00 00:20:04.298 [2024-04-18 17:07:19.761620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.298 [2024-04-18 17:07:19.761649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:04.298 [2024-04-18 17:07:19.772465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f96f8 00:20:04.298 [2024-04-18 17:07:19.773818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.298 [2024-04-18 17:07:19.773846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:04.298 [2024-04-18 17:07:19.784242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0350 00:20:04.298 [2024-04-18 17:07:19.785064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.298 [2024-04-18 17:07:19.785093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:04.298 [2024-04-18 17:07:19.796665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e5220 00:20:04.298 [2024-04-18 17:07:19.797659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.298 [2024-04-18 17:07:19.797687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:04.298 [2024-04-18 17:07:19.807879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f5be8 00:20:04.299 [2024-04-18 17:07:19.809556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.809585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.818053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190eaab8 00:20:04.299 [2024-04-18 17:07:19.818873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.818909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.830690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e4140 00:20:04.299 [2024-04-18 17:07:19.831624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.831652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.843198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0350 00:20:04.299 [2024-04-18 17:07:19.844320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.844349] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.855853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f4f40 00:20:04.299 [2024-04-18 17:07:19.857149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.857184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.868368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fc128 00:20:04.299 [2024-04-18 17:07:19.869843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.869872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.880882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e4140 00:20:04.299 [2024-04-18 17:07:19.882590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.882618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.893353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ec840 00:20:04.299 [2024-04-18 17:07:19.895169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.895197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.905914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fe2e8 00:20:04.299 [2024-04-18 17:07:19.907913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.907941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.914503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ea248 00:20:04.299 [2024-04-18 17:07:19.915293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.915321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.925828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f8a50 00:20:04.299 [2024-04-18 17:07:19.926606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.926634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.938181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e1710 00:20:04.299 [2024-04-18 17:07:19.939143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:04.299 [2024-04-18 17:07:19.939172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.950651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fd208 00:20:04.299 [2024-04-18 17:07:19.951820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.951848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.963218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e7818 00:20:04.299 [2024-04-18 17:07:19.964501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.964537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.975664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fc560 00:20:04.299 [2024-04-18 17:07:19.977177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.977205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:19.988155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e1710 00:20:04.299 [2024-04-18 17:07:19.989714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:12197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:19.989742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:04.299 [2024-04-18 17:07:20.000658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f9b30 00:20:04.299 [2024-04-18 17:07:20.002726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.299 [2024-04-18 17:07:20.002754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:04.559 [2024-04-18 17:07:20.014081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fd640 00:20:04.559 [2024-04-18 17:07:20.016050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.559 [2024-04-18 17:07:20.016085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:04.559 [2024-04-18 17:07:20.022731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f92c0 00:20:04.559 [2024-04-18 17:07:20.023508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.023538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.034061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fac10 00:20:04.560 [2024-04-18 17:07:20.034963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.034995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.046633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e3498 00:20:04.560 [2024-04-18 17:07:20.047588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.047617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.059727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0350 00:20:04.560 [2024-04-18 17:07:20.060856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.060887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.072430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f6458 00:20:04.560 [2024-04-18 17:07:20.073705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.073735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.085107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fc128 
00:20:04.560 [2024-04-18 17:07:20.086612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.086652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.097879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e3498 00:20:04.560 [2024-04-18 17:07:20.099506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.099534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.110487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f7970 00:20:04.560 [2024-04-18 17:07:20.112308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.112337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.123034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fe2e8 00:20:04.560 [2024-04-18 17:07:20.124995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.125023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.131556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2014830) with pdu=0x2000190e8088 00:20:04.560 [2024-04-18 17:07:20.132390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.132417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.143992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190de038 00:20:04.560 [2024-04-18 17:07:20.144977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.145005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.156600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e1f80 00:20:04.560 [2024-04-18 17:07:20.157776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.157803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.167988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ed920 00:20:04.560 [2024-04-18 17:07:20.169016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.169050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.179523] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190eaab8 00:20:04.560 [2024-04-18 17:07:20.180501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.180529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.192095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e73e0 00:20:04.560 [2024-04-18 17:07:20.193267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.193295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.204709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e01f8 00:20:04.560 [2024-04-18 17:07:20.206079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.206106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.216004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e9168 00:20:04.560 [2024-04-18 17:07:20.216780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.216809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:20:04.560 [2024-04-18 17:07:20.227913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f92c0 00:20:04.560 [2024-04-18 17:07:20.228667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.228695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.241415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ebb98 00:20:04.560 [2024-04-18 17:07:20.242921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.242949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:04.560 [2024-04-18 17:07:20.254085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190eaab8 00:20:04.560 [2024-04-18 17:07:20.255668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.560 [2024-04-18 17:07:20.255696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.266826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190df550 00:20:04.822 [2024-04-18 17:07:20.268626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.268656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.279398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e3498 00:20:04.822 [2024-04-18 17:07:20.281410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.281439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.287997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f1430 00:20:04.822 [2024-04-18 17:07:20.288826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.288854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.301824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fb480 00:20:04.822 [2024-04-18 17:07:20.302888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.302916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.313091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f1ca0 00:20:04.822 [2024-04-18 17:07:20.314792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.314820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.324309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e8088 00:20:04.822 [2024-04-18 17:07:20.325173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.325203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.337901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f8618 00:20:04.822 [2024-04-18 17:07:20.338942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.338970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.351407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ebb98 00:20:04.822 [2024-04-18 17:07:20.352604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.352631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.364903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190dfdc0 00:20:04.822 [2024-04-18 17:07:20.366287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.366318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.378624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ef270 00:20:04.822 [2024-04-18 17:07:20.380192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.380223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.392211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ea680 00:20:04.822 [2024-04-18 17:07:20.393948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.393980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.405860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0bc0 00:20:04.822 [2024-04-18 17:07:20.407804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.407834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.419628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f35f0 00:20:04.822 [2024-04-18 17:07:20.421815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 
[2024-04-18 17:07:20.421847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.428959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fac10 00:20:04.822 [2024-04-18 17:07:20.429803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.429835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.442709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0bc0 00:20:04.822 [2024-04-18 17:07:20.443730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.443762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.822 [2024-04-18 17:07:20.456331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e0630 00:20:04.822 [2024-04-18 17:07:20.457582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.822 [2024-04-18 17:07:20.457610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:04.823 [2024-04-18 17:07:20.469887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ec840 00:20:04.823 [2024-04-18 17:07:20.471273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15588 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.823 [2024-04-18 17:07:20.471305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.823 [2024-04-18 17:07:20.483420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f6890 00:20:04.823 [2024-04-18 17:07:20.485025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.823 [2024-04-18 17:07:20.485056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:04.823 [2024-04-18 17:07:20.494468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ee5c8 00:20:04.823 [2024-04-18 17:07:20.495117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.823 [2024-04-18 17:07:20.495151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:04.823 [2024-04-18 17:07:20.508116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ebb98 00:20:04.823 [2024-04-18 17:07:20.509004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.823 [2024-04-18 17:07:20.509032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:04.823 [2024-04-18 17:07:20.521775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e8d30 00:20:04.823 [2024-04-18 17:07:20.522840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:7829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.823 [2024-04-18 17:07:20.522868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.534182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f1ca0 00:20:05.083 [2024-04-18 17:07:20.535965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.083 [2024-04-18 17:07:20.535993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.545497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ec840 00:20:05.083 [2024-04-18 17:07:20.546333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.083 [2024-04-18 17:07:20.546362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.559099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f8618 00:20:05.083 [2024-04-18 17:07:20.560130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.083 [2024-04-18 17:07:20.560162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.572691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0bc0 00:20:05.083 [2024-04-18 17:07:20.573890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.083 [2024-04-18 17:07:20.573920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.586191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ea248 00:20:05.083 [2024-04-18 17:07:20.587652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.083 [2024-04-18 17:07:20.587678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.599826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ef270 00:20:05.083 [2024-04-18 17:07:20.601360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.083 [2024-04-18 17:07:20.601398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.613541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e95a0 00:20:05.083 [2024-04-18 17:07:20.615312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.083 [2024-04-18 17:07:20.615344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.627136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190df988 00:20:05.083 
[2024-04-18 17:07:20.629023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.083 [2024-04-18 17:07:20.629054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:05.083 [2024-04-18 17:07:20.640639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f35f0 00:20:05.084 [2024-04-18 17:07:20.642739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.642766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.649866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0350 00:20:05.084 [2024-04-18 17:07:20.650712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.650754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.663481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190df988 00:20:05.084 [2024-04-18 17:07:20.664557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.664586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.675810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2014830) with pdu=0x2000190e0630 00:20:05.084 [2024-04-18 17:07:20.676823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.676854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.689323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f5be8 00:20:05.084 [2024-04-18 17:07:20.690556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.690584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.702906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0bc0 00:20:05.084 [2024-04-18 17:07:20.704251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.704282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.716456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e4578 00:20:05.084 [2024-04-18 17:07:20.718008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.718039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.730018] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ea680 00:20:05.084 [2024-04-18 17:07:20.731759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.731791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.743559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f8e88 00:20:05.084 [2024-04-18 17:07:20.745468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.745495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.757160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f3a28 00:20:05.084 [2024-04-18 17:07:20.759234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.759265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:05.084 [2024-04-18 17:07:20.766405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f4f40 00:20:05.084 [2024-04-18 17:07:20.767250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.767280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:20:05.084 [2024-04-18 17:07:20.779895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f8e88 00:20:05.084 [2024-04-18 17:07:20.780926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.084 [2024-04-18 17:07:20.780957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.793509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f8618 00:20:05.343 [2024-04-18 17:07:20.794741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.794768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.807118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ec840 00:20:05.343 [2024-04-18 17:07:20.808550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.808577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.821879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ea680 00:20:05.343 [2024-04-18 17:07:20.823927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.823959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.831069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fef90 00:20:05.343 [2024-04-18 17:07:20.831904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.831942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.845764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ef270 00:20:05.343 [2024-04-18 17:07:20.847271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.847301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.859240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e23b8 00:20:05.343 [2024-04-18 17:07:20.860941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.860972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.872753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ddc00 00:20:05.343 [2024-04-18 17:07:20.874687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.874719] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.886308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fcdd0 00:20:05.343 [2024-04-18 17:07:20.888342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.888373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.900016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f1430 00:20:05.343 [2024-04-18 17:07:20.902320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.902350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.909308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f6cc8 00:20:05.343 [2024-04-18 17:07:20.910302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.910332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.921603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f2510 00:20:05.343 [2024-04-18 17:07:20.922670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.922696] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.935160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e8d30 00:20:05.343 [2024-04-18 17:07:20.936322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.936353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.948687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ee5c8 00:20:05.343 [2024-04-18 17:07:20.950050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.950081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.962289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e5a90 00:20:05.343 [2024-04-18 17:07:20.963841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.343 [2024-04-18 17:07:20.963874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.975894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f4f40 00:20:05.343 [2024-04-18 17:07:20.977657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8053 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:05.343 [2024-04-18 17:07:20.977683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.343 [2024-04-18 17:07:20.988003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e2c28 00:20:05.343 [2024-04-18 17:07:20.989189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.344 [2024-04-18 17:07:20.989220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.344 [2024-04-18 17:07:21.001133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f31b8 00:20:05.344 [2024-04-18 17:07:21.002139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.344 [2024-04-18 17:07:21.002171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:05.344 [2024-04-18 17:07:21.014862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ec408 00:20:05.344 [2024-04-18 17:07:21.016065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.344 [2024-04-18 17:07:21.016098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.344 [2024-04-18 17:07:21.027114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f6cc8 00:20:05.344 [2024-04-18 17:07:21.029000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:24576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.344 [2024-04-18 17:07:21.029031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.344 [2024-04-18 17:07:21.038281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fa3a0 00:20:05.344 [2024-04-18 17:07:21.039282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.344 [2024-04-18 17:07:21.039312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.051902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ddc00 00:20:05.603 [2024-04-18 17:07:21.053068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.053099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.065472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ee5c8 00:20:05.603 [2024-04-18 17:07:21.066829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.066861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.079150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190df550 00:20:05.603 [2024-04-18 17:07:21.080673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.080701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.092864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ed4e8 00:20:05.603 [2024-04-18 17:07:21.094576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.094605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.106492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f3e60 00:20:05.603 [2024-04-18 17:07:21.108396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.108448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.120152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f7538 00:20:05.603 [2024-04-18 17:07:21.122228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.122260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.133697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ed920 00:20:05.603 
[2024-04-18 17:07:21.135916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.135947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.142868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190feb58 00:20:05.603 [2024-04-18 17:07:21.143863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.143894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.156344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f57b0 00:20:05.603 [2024-04-18 17:07:21.157540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.157568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.169908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ed4e8 00:20:05.603 [2024-04-18 17:07:21.171271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.171309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.182115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2014830) with pdu=0x2000190ee5c8 00:20:05.603 [2024-04-18 17:07:21.183471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.183498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.195691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e4578 00:20:05.603 [2024-04-18 17:07:21.197219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.197249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.209258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f6cc8 00:20:05.603 [2024-04-18 17:07:21.210974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.211006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.222767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fac10 00:20:05.603 [2024-04-18 17:07:21.224701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.224733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.236352] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ea248 00:20:05.603 [2024-04-18 17:07:21.238437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.238465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.249979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e1710 00:20:05.603 [2024-04-18 17:07:21.252211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.252241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.259264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f1ca0 00:20:05.603 [2024-04-18 17:07:21.260267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.260298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:05.603 [2024-04-18 17:07:21.272372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fe720 00:20:05.603 [2024-04-18 17:07:21.273394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.273438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:20:05.603 [2024-04-18 17:07:21.285686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e12d8 00:20:05.603 [2024-04-18 17:07:21.286866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.603 [2024-04-18 17:07:21.286897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:05.604 [2024-04-18 17:07:21.299208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e1f80 00:20:05.604 [2024-04-18 17:07:21.300584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.604 [2024-04-18 17:07:21.300611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.312884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190de8a8 00:20:05.867 [2024-04-18 17:07:21.314424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.314453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.324920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ebb98 00:20:05.867 [2024-04-18 17:07:21.326331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.326359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.337395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f1430 00:20:05.867 [2024-04-18 17:07:21.338924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.338951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.350750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f6cc8 00:20:05.867 [2024-04-18 17:07:21.352636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.352679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.364262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e9e10 00:20:05.867 [2024-04-18 17:07:21.366314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.366345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.377657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e2c28 00:20:05.867 [2024-04-18 17:07:21.379911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.379943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.386806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ec408 00:20:05.867 [2024-04-18 17:07:21.387794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.387825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.400313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f46d0 00:20:05.867 [2024-04-18 17:07:21.401492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.401535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.413469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e49b0 00:20:05.867 [2024-04-18 17:07:21.414603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.414632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.425470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fc128 00:20:05.867 [2024-04-18 17:07:21.426675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:05.867 [2024-04-18 17:07:21.426701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.438564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0ff8 00:20:05.867 [2024-04-18 17:07:21.439751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.867 [2024-04-18 17:07:21.439783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:05.867 [2024-04-18 17:07:21.453755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f5be8 00:20:05.867 [2024-04-18 17:07:21.455473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.455501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.464697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ec408 00:20:05.868 [2024-04-18 17:07:21.465471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.465499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.478004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fbcf0 00:20:05.868 [2024-04-18 17:07:21.478965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16864 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.478996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.491486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f9b30 00:20:05.868 [2024-04-18 17:07:21.492632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.492659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.503583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f0ff8 00:20:05.868 [2024-04-18 17:07:21.505502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.505535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.514718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f3a28 00:20:05.868 [2024-04-18 17:07:21.515704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.515731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.528266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f3e60 00:20:05.868 [2024-04-18 17:07:21.529426] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.529454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.541849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e27f0 00:20:05.868 [2024-04-18 17:07:21.543196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.543227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.555404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190fef90 00:20:05.868 [2024-04-18 17:07:21.556918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.556948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:05.868 [2024-04-18 17:07:21.569010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e9168 00:20:05.868 [2024-04-18 17:07:21.570771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.868 [2024-04-18 17:07:21.570802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:06.126 [2024-04-18 17:07:21.582607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190f92c0 00:20:06.126 [2024-04-18 17:07:21.584490] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:06.126 [2024-04-18 17:07:21.584518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:20:06.126 [2024-04-18 17:07:21.596218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190e3498
00:20:06.126 [2024-04-18 17:07:21.598247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:06.126 [2024-04-18 17:07:21.598278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:20:06.126 [2024-04-18 17:07:21.609792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014830) with pdu=0x2000190ebb98
00:20:06.126 [2024-04-18 17:07:21.612025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:06.126 [2024-04-18 17:07:21.612057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:06.126
00:20:06.126 Latency(us)
00:20:06.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:06.126 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:06.126 nvme0n1 : 2.00 20195.48 78.89 0.00 0.00 6328.08 2342.31 16408.27
00:20:06.126 ===================================================================================================================
00:20:06.126 Total : 20195.48 78.89 0.00 0.00 6328.08 2342.31 16408.27
00:20:06.126 0
00:20:06.126 17:07:21 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:06.126 17:07:21 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:06.126 17:07:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:06.126 17:07:21 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:06.126 | .driver_specific
00:20:06.126 | .nvme_error
00:20:06.126 | .status_code
00:20:06.126 | .command_transient_transport_error'
00:20:06.385 17:07:21 -- host/digest.sh@71 -- # (( 158 > 0 ))
00:20:06.385 17:07:21 -- host/digest.sh@73 -- # killprocess 1758348
00:20:06.385 17:07:21 -- common/autotest_common.sh@936 -- # '[' -z 1758348 ']'
00:20:06.385 17:07:21 -- common/autotest_common.sh@940 -- # kill -0 1758348
00:20:06.385 17:07:21 -- common/autotest_common.sh@941 -- # uname
00:20:06.385 17:07:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:06.385 17:07:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1758348
00:20:06.385 17:07:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:06.385 17:07:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:06.385 17:07:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1758348'
killing process with pid 1758348
00:20:06.386 17:07:21 -- common/autotest_common.sh@955 -- # kill 1758348
00:20:06.386 Received shutdown signal, test time was about 2.000000 seconds
00:20:06.386
00:20:06.386 Latency(us)
00:20:06.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:06.386 ===================================================================================================================
00:20:06.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:06.386 17:07:21 -- common/autotest_common.sh@960 -- # wait 1758348
00:20:06.644 17:07:22 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:20:06.644 17:07:22 -- host/digest.sh@54 -- # local rw bs qd
00:20:06.644 17:07:22 -- host/digest.sh@56 -- # rw=randwrite
00:20:06.644 17:07:22 -- host/digest.sh@56 -- # bs=131072
00:20:06.644 17:07:22 -- host/digest.sh@56 -- # qd=16
00:20:06.644 17:07:22 -- host/digest.sh@58 -- # bperfpid=1758781
00:20:06.644 17:07:22 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:20:06.644 17:07:22 -- host/digest.sh@60 -- # waitforlisten 1758781 /var/tmp/bperf.sock
00:20:06.644 17:07:22 -- common/autotest_common.sh@817 -- # '[' -z 1758781 ']'
00:20:06.644 17:07:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:06.644 17:07:22 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:06.644 17:07:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:06.644 17:07:22 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:06.644 17:07:22 -- common/autotest_common.sh@10 -- # set +x
00:20:06.644 [2024-04-18 17:07:22.196106] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
00:20:06.644 [2024-04-18 17:07:22.196186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758781 ]
00:20:06.644 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:06.644 Zero copy mechanism will not be used.
00:20:06.644 EAL: No free 2048 kB hugepages reported on node 1
00:20:06.644 [2024-04-18 17:07:22.260052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:06.644 [2024-04-18 17:07:22.374006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:06.902 17:07:22 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:06.902 17:07:22 -- common/autotest_common.sh@850 -- # return 0
00:20:06.902 17:07:22 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:06.902 17:07:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:07.160 17:07:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:07.160 17:07:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:07.160 17:07:22 -- common/autotest_common.sh@10 -- # set +x
00:20:07.160 17:07:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:07.160 17:07:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:07.160 17:07:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:07.418 nvme0n1
00:20:07.418 17:07:23 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:07.418 17:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:07.418 17:07:23 -- common/autotest_common.sh@10 -- # set +x
00:20:07.418 17:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:07.418 17:07:23 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:07.418 17:07:23 -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:07.675 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:07.675 Zero copy mechanism will not be used. 00:20:07.675 Running I/O for 2 seconds... 00:20:07.675 [2024-04-18 17:07:23.188033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.188430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.188487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.195782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.196091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.196121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.203159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.203507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.203551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.210367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 
[2024-04-18 17:07:23.210681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.210711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.218069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.218395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.218424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.226237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.226577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.226606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.233757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.234085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.234113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.240591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.240931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.240958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.247578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.247880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.247909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.254762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.255188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.255218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.261986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.262289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.262318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.268697] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.269020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.269048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.275690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.275993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.675 [2024-04-18 17:07:23.276022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.675 [2024-04-18 17:07:23.283009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.675 [2024-04-18 17:07:23.283318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.283355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.289995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.290337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.290388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.296877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.297200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.297227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.303968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.304328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.304357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.311129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.311439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.311467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.317537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.317840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.317869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.323996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.324298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.324327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.330928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.331258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.331302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.338065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.338368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.338404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.344877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.345347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.345399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.352189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.352541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.352569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.361089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.361429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.361458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.369784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.370140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.676 [2024-04-18 17:07:23.370183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.676 [2024-04-18 17:07:23.378400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.676 [2024-04-18 17:07:23.378732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:07.676 [2024-04-18 17:07:23.378776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.934 [2024-04-18 17:07:23.387046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.934 [2024-04-18 17:07:23.387351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-18 17:07:23.387388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.934 [2024-04-18 17:07:23.395412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.934 [2024-04-18 17:07:23.395746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-18 17:07:23.395788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.934 [2024-04-18 17:07:23.403909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.934 [2024-04-18 17:07:23.404265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-18 17:07:23.404308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.934 [2024-04-18 17:07:23.412431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.934 [2024-04-18 17:07:23.412752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-18 17:07:23.412780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.934 [2024-04-18 17:07:23.420619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.934 [2024-04-18 17:07:23.420936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-18 17:07:23.420965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.934 [2024-04-18 17:07:23.429317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.934 [2024-04-18 17:07:23.429638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-18 17:07:23.429667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.934 [2024-04-18 17:07:23.438090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.934 [2024-04-18 17:07:23.438413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.438441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.446510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.446832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.446861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.455181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.455505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.455534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.462900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.463219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.463248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.471447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.471766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.471795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.480646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 
00:20:07.935 [2024-04-18 17:07:23.480962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.480991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.489255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.489580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.489617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.497882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.498227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.498256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.505071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.505201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.505229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.513755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.514072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.514101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.522395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.522727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.522756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.530630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.530931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.530959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.539485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.539789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.539818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 
17:07:23.547318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.547730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.547772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.556358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.556699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.556743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.564664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.564988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.565032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.573096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.573418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.573447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.581912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.582229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.582282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.590005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.590305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.590334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.598682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.598984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.599013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.607014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.607321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.607350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.615377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.615705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.615734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.623733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.624057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.624089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.935 [2024-04-18 17:07:23.632247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:07.935 [2024-04-18 17:07:23.632570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.935 [2024-04-18 17:07:23.632613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.640508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.640827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.640855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.649100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.649432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.649460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.657732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.658037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.658066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.665831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.666149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.666178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.674816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.675131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.675160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.683691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.684008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.684036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.692428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.692746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.692774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.701237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.701552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.701580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.709803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.710126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.710154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.718187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.718497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.718527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.726749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.727064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.727093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.735106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.735415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.197 [2024-04-18 17:07:23.735450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.197 [2024-04-18 17:07:23.742811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.197 [2024-04-18 17:07:23.743114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.743143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.751071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.751416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.751463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.758827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.758921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.758949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.765940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.766267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.766310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.772736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 
00:20:08.198 [2024-04-18 17:07:23.773065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.773107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.779482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.779786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.779815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.785572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.785919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.785946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.792497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.792830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.792872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.800290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.800614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.800643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.808814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.809129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.809158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.816884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.817187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.817216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.824353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.824699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.824727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 
17:07:23.831533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.831860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.831905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.838336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.838644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.838678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.845017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.845327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.845355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.852026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.852360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.852396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.858757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.859084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.859111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.865532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.865836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.865866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.872793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.873137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.873164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.880052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.880407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.880437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.887259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.887353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.887388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.894598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.894982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.895010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.198 [2024-04-18 17:07:23.901466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.198 [2024-04-18 17:07:23.901802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.198 [2024-04-18 17:07:23.901844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.908690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.908994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.909022] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.915617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.915948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.915976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.922422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.922769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.922797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.929124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.929220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.929247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.935749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.936064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.936091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.942314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.942618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.942646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.949816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.949915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.949943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.956453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.956740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.956769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.962977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.963285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.963317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.969469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.969771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.969798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.976328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.976617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.976645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.982584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.982886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.982914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.988748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.989083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.989125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:23.995671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:23.995969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:23.996010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:24.002085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:24.002369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:24.002405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:24.009040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:24.009391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:24.009419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:24.015212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 
00:20:08.460 [2024-04-18 17:07:24.015514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:24.015548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:24.021516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:24.021817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:24.021846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:24.028025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:24.028325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:24.028353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:24.034324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:24.034611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:24.034641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:24.040576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:24.040864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.460 [2024-04-18 17:07:24.040907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.460 [2024-04-18 17:07:24.047154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.460 [2024-04-18 17:07:24.047448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.047478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.053124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.053438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.053467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.059490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.059787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.059816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 
17:07:24.065754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.066038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.066082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.072039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.072329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.072357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.078378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.078676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.078706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.085006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.085330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.085372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.091880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.092256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.092284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.098654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.098940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.098969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.106108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.106416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.106445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.114617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.115059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.115086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.122578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.122866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.122893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.130599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.131021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.131048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.137596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.137911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.137958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.144488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.144773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.144802] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.151132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.151427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.151454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.157599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.157886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.157913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.461 [2024-04-18 17:07:24.164210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.461 [2024-04-18 17:07:24.164505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.461 [2024-04-18 17:07:24.164533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.170550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.170866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.170895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.176933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.177219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.177263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.183256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.183547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.183577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.189640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.189942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.189968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.196229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.196524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.196552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.202744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.203093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.203139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.208861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.209159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.209203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.215286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.215579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.215608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.221671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.221958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.222003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.228040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.228359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.228395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.234415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.234702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.234731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.240189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.240480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.240509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.246506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 
00:20:08.721 [2024-04-18 17:07:24.246849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.246878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.252306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.252596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.252626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.258648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.258946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.258974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.264765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.265065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.265094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.271608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.271898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.271928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.279198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.721 [2024-04-18 17:07:24.279497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.721 [2024-04-18 17:07:24.279526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.721 [2024-04-18 17:07:24.286559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.286848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.286876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.293233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.293525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.293553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 
17:07:24.299481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.299766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.299800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.305820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.306106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.306134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.312539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.312893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.312935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.320196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.320493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.320522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.326356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.326654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.326681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.332570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.332878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.332910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.339046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.339331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.339358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.345300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.345607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.345635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.351601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.351915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.351943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.358179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.358475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.358505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.364695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.364980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.365008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.371003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.371318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.371346] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.377464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.377749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.377792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.383739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.384038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.384081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.390137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.390477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.390504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.396238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.396533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.396576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.402840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.403124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.403166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.409078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.409402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.409430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.415301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.415647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.415676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.722 [2024-04-18 17:07:24.421615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.722 [2024-04-18 17:07:24.421951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.722 [2024-04-18 17:07:24.421979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.983 [2024-04-18 17:07:24.428103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.428395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.428424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.434633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.434919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.434946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.441021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.441332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.441390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.447160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.447455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.447484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.453658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.453955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.453981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.460138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.460431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.460459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.466527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.466813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.466846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.472849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 
00:20:08.984 [2024-04-18 17:07:24.473133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.473161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.479135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.479433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.479462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.485315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.485602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.485629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.491579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.491867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.491895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.497973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.498312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.498339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.504015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.504301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.504329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.510390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.510691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.510718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.516900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.517198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.517225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 
17:07:24.523231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.523530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.523558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.529828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.530128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.530155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.536126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.536496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.536524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.542281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.542573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.542602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.548719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.549003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.549032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.555123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.555445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.555473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.561561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.561865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.561907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.568211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.568507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.568536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.574787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.575088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.575113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.581158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.581452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.581480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.587564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.587852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.587880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.984 [2024-04-18 17:07:24.593715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.984 [2024-04-18 17:07:24.594030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.984 [2024-04-18 17:07:24.594056] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.600118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.600441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.600469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.606847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.607133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.607180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.613409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.613710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.613753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.619624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.619910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.619940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.626084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.626392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.626421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.632421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.632707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.632739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.638744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.639044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.639071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.645060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.645359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.645409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.651119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.651466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.651494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.657654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.657951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.657977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.664164] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.664505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.664533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.671414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.671740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.671769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.677820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.678119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.678147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.985 [2024-04-18 17:07:24.684119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:08.985 [2024-04-18 17:07:24.684416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.985 [2024-04-18 17:07:24.684444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.690472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.690772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.690801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.696485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 
00:20:09.247 [2024-04-18 17:07:24.696755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.696796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.702691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.702959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.702986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.708919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.709203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.709231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.715073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.715357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.715406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.721287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.721566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.721594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.727410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.727680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.727708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.733932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.734216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.734243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.740116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.740398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.740433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 
17:07:24.746363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.746644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.746671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.752675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.752960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.752988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.758810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.759095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.759136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.247 [2024-04-18 17:07:24.765013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90 00:20:09.247 [2024-04-18 17:07:24.765346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.247 [2024-04-18 17:07:24.765373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:09.247 [2024-04-18 17:07:24.771579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90
00:20:09.247 [2024-04-18 17:07:24.771861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.247 [2024-04-18 17:07:24.771888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data-digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for the remaining I/Os from 17:07:24.777 through 17:07:25.184, differing only in timestamp, lba, and sqhd values ...]
00:20:09.509 [2024-04-18 17:07:25.184692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2014b70) with pdu=0x2000190fef90
00:20:09.509 [2024-04-18 17:07:25.184972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.509 [2024-04-18 17:07:25.185000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.509
00:20:09.509 Latency(us)
00:20:09.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:09.509 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:09.509 nvme0n1 : 2.00 4401.21 550.15 0.00 0.00 3627.73 1990.35 9126.49
00:20:09.509 ===================================================================================================================
00:20:09.509 Total : 4401.21 550.15 0.00 0.00 3627.73 1990.35 9126.49
00:20:09.509 0
00:20:09.509 17:07:25 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:09.509 17:07:25 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:09.509 17:07:25 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:09.509 | .driver_specific
00:20:09.509 | .nvme_error
00:20:09.509 | .status_code
00:20:09.509 | .command_transient_transport_error'
00:20:09.509 17:07:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:09.766 17:07:25 -- host/digest.sh@71 -- # (( 284 > 0 ))
00:20:09.766 17:07:25 -- host/digest.sh@73 -- # killprocess 1758781
00:20:09.766 17:07:25 -- common/autotest_common.sh@936 -- # '[' -z 1758781 ']'
00:20:09.766 17:07:25 -- common/autotest_common.sh@940 -- # kill -0 1758781
00:20:09.766 17:07:25 -- common/autotest_common.sh@941 -- # uname
00:20:09.766 17:07:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:09.766 17:07:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1758781
00:20:10.027 17:07:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:10.027 17:07:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:10.027 17:07:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1758781'
killing process with pid 1758781
17:07:25 -- common/autotest_common.sh@955 -- # kill 1758781
Received shutdown signal, test time was about 2.000000 seconds
00:20:10.027
00:20:10.027 Latency(us)
00:20:10.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:10.027 ===================================================================================================================
00:20:10.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:10.027 17:07:25 -- common/autotest_common.sh@960 -- # wait 1758781
00:20:10.285 17:07:25 -- host/digest.sh@116 -- # killprocess 1757387
00:20:10.285 17:07:25 -- common/autotest_common.sh@936 -- # '[' -z 1757387 ']'
00:20:10.285 17:07:25 -- common/autotest_common.sh@940 -- # kill -0 1757387
00:20:10.285 17:07:25 -- common/autotest_common.sh@941 -- # uname
00:20:10.285 17:07:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:10.285 17:07:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1757387
00:20:10.285 17:07:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:10.285 17:07:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:10.285 17:07:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1757387'
killing process with pid 1757387
17:07:25 -- common/autotest_common.sh@955 -- # kill 1757387
00:20:10.546
00:20:10.546 real 0m15.528s
00:20:10.546 user 0m30.742s
00:20:10.546 sys 0m4.095s
00:20:10.546 17:07:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:10.546 17:07:26 -- common/autotest_common.sh@10 -- # set +x
00:20:10.546 ************************************
00:20:10.546 END TEST nvmf_digest_error
************************************
00:20:10.546 17:07:26 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:20:10.546 17:07:26 -- host/digest.sh@150 -- # nvmftestfini
00:20:10.546 17:07:26 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:10.546 17:07:26 -- nvmf/common.sh@117 -- # sync
00:20:10.546 17:07:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:10.546 17:07:26 -- nvmf/common.sh@120 -- # set +e
00:20:10.546 17:07:26 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:10.546 17:07:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:10.546 rmmod nvme_tcp
00:20:10.546 rmmod nvme_fabrics
00:20:10.546 rmmod nvme_keyring
00:20:10.546 17:07:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:10.546 17:07:26 -- nvmf/common.sh@124 -- # set -e
00:20:10.546 17:07:26 -- nvmf/common.sh@125 -- # return 0
00:20:10.546 17:07:26 -- nvmf/common.sh@478 -- # '[' -n 1757387 ']'
00:20:10.546 17:07:26 -- nvmf/common.sh@479 -- # killprocess 1757387
00:20:10.546 17:07:26 -- common/autotest_common.sh@936 -- # '[' -z 1757387 ']'
00:20:10.546 17:07:26 -- common/autotest_common.sh@940 -- # kill -0 1757387
00:20:10.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1757387) - No such process
00:20:10.546 17:07:26 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1757387 is not found'
00:20:10.546 Process with pid 1757387 is not found
00:20:10.546 17:07:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:10.546 17:07:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:10.546 17:07:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:10.546 17:07:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:10.546 17:07:26 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:10.546 17:07:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:10.546 17:07:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:10.546 17:07:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:13.081 17:07:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:13.081
00:20:13.081 real 0m37.515s
00:20:13.081 user 1m6.128s
00:20:13.081 sys 0m9.915s
00:20:13.081 17:07:28 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:13.081 17:07:28 -- common/autotest_common.sh@10 -- # set +x
00:20:13.081 ************************************
00:20:13.081 END TEST nvmf_digest
00:20:13.081 ************************************
00:20:13.081 17:07:28 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:20:13.081 17:07:28 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:20:13.081 17:07:28 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]]
00:20:13.081 17:07:28 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:20:13.081 17:07:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:13.081 17:07:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:13.081 17:07:28 -- common/autotest_common.sh@10 -- # set +x
00:20:13.081 ************************************
00:20:13.081 START TEST nvmf_bdevperf
00:20:13.081 ************************************
00:20:13.082 17:07:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:20:13.082 * Looking for test storage...
00:20:13.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:13.082 17:07:28 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:13.082 17:07:28 -- nvmf/common.sh@7 -- # uname -s
00:20:13.082 17:07:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:13.082 17:07:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:13.082 17:07:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:13.082 17:07:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:13.082 17:07:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:13.082 17:07:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:13.082 17:07:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:13.082 17:07:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:13.082 17:07:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:13.082 17:07:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:13.082 17:07:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:13.082 17:07:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:13.082 17:07:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:13.082 17:07:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:13.082 17:07:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:13.082 17:07:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:13.082 17:07:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:13.082 17:07:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:13.082 17:07:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:13.082 17:07:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:13.082 17:07:28 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.082 17:07:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.082 17:07:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.082 17:07:28 -- paths/export.sh@5 -- # export PATH 00:20:13.082 17:07:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.082 17:07:28 -- nvmf/common.sh@47 -- # : 0 00:20:13.082 17:07:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.082 17:07:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.082 17:07:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.082 17:07:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.082 17:07:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.082 17:07:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.082 17:07:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.082 17:07:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.082 17:07:28 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:13.082 17:07:28 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:13.082 17:07:28 -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:13.082 17:07:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:13.082 17:07:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.082 17:07:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:13.082 17:07:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:13.082 17:07:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:13.082 17:07:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.082 17:07:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.082 17:07:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.082 17:07:28 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:13.082 17:07:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:13.082 17:07:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.082 17:07:28 -- common/autotest_common.sh@10 -- # set +x 00:20:14.984 17:07:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:14.984 17:07:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.984 17:07:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.984 17:07:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.984 17:07:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.984 17:07:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.984 17:07:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.984 17:07:30 -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.984 17:07:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.984 17:07:30 -- nvmf/common.sh@296 -- # e810=() 00:20:14.984 17:07:30 -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.984 17:07:30 -- nvmf/common.sh@297 -- # x722=() 00:20:14.984 17:07:30 -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.984 17:07:30 -- nvmf/common.sh@298 -- # mlx=() 00:20:14.984 17:07:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.984 17:07:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.984 17:07:30 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.984 17:07:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.984 17:07:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.984 17:07:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.984 17:07:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.984 17:07:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:14.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:14.984 17:07:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.984 17:07:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:14.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:14.984 17:07:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.984 17:07:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.984 17:07:30 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.984 17:07:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.984 17:07:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.984 17:07:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:14.984 17:07:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.984 17:07:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:14.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:14.984 17:07:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.984 17:07:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.984 17:07:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.984 17:07:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:14.984 17:07:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.984 17:07:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:14.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:14.985 17:07:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.985 17:07:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:14.985 17:07:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:14.985 17:07:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:14.985 17:07:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:14.985 17:07:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:14.985 17:07:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.985 17:07:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.985 17:07:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.985 17:07:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.985 17:07:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.985 17:07:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.985 17:07:30 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.985 17:07:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.985 17:07:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.985 17:07:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.985 17:07:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.985 17:07:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.985 17:07:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.985 17:07:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.985 17:07:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.985 17:07:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.985 17:07:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.985 17:07:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.985 17:07:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.985 17:07:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:20:14.985 00:20:14.985 --- 10.0.0.2 ping statistics --- 00:20:14.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.985 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:20:14.985 17:07:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:20:14.985 00:20:14.985 --- 10.0.0.1 ping statistics --- 00:20:14.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.985 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:20:14.985 17:07:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.985 17:07:30 -- nvmf/common.sh@411 -- # return 0 00:20:14.985 17:07:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:14.985 17:07:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.985 17:07:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:14.985 17:07:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:14.985 17:07:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.985 17:07:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:14.985 17:07:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:14.985 17:07:30 -- host/bdevperf.sh@25 -- # tgt_init 00:20:14.985 17:07:30 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:14.985 17:07:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:14.985 17:07:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:14.985 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.985 17:07:30 -- nvmf/common.sh@470 -- # nvmfpid=1761233 00:20:14.985 17:07:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:14.985 17:07:30 -- nvmf/common.sh@471 -- # waitforlisten 1761233 00:20:14.985 17:07:30 -- common/autotest_common.sh@817 -- # '[' -z 1761233 ']' 00:20:14.985 17:07:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.985 17:07:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:14.985 17:07:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:14.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.985 17:07:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:14.985 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.985 [2024-04-18 17:07:30.469652] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:14.985 [2024-04-18 17:07:30.469760] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.985 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.985 [2024-04-18 17:07:30.534621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.985 [2024-04-18 17:07:30.646335] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.985 [2024-04-18 17:07:30.646412] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.985 [2024-04-18 17:07:30.646435] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.985 [2024-04-18 17:07:30.646447] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.985 [2024-04-18 17:07:30.646458] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:14.985 [2024-04-18 17:07:30.646541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.985 [2024-04-18 17:07:30.646606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.985 [2024-04-18 17:07:30.646608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.244 17:07:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:15.244 17:07:30 -- common/autotest_common.sh@850 -- # return 0 00:20:15.244 17:07:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:15.244 17:07:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:15.244 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.244 17:07:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.244 17:07:30 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.244 17:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.244 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.244 [2024-04-18 17:07:30.800390] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.244 17:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.244 17:07:30 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:15.244 17:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.244 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.244 Malloc0 00:20:15.244 17:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.244 17:07:30 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.244 17:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.244 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.244 17:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.244 17:07:30 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:15.244 17:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.244 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.244 17:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.244 17:07:30 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.244 17:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.244 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.244 [2024-04-18 17:07:30.868826] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.244 17:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.244 17:07:30 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:15.244 17:07:30 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:15.244 17:07:30 -- nvmf/common.sh@521 -- # config=() 00:20:15.244 17:07:30 -- nvmf/common.sh@521 -- # local subsystem config 00:20:15.244 17:07:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:15.244 17:07:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:15.244 { 00:20:15.244 "params": { 00:20:15.244 "name": "Nvme$subsystem", 00:20:15.244 "trtype": "$TEST_TRANSPORT", 00:20:15.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.244 "adrfam": "ipv4", 00:20:15.244 "trsvcid": "$NVMF_PORT", 00:20:15.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.244 "hdgst": ${hdgst:-false}, 00:20:15.244 "ddgst": ${ddgst:-false} 00:20:15.244 }, 00:20:15.244 "method": "bdev_nvme_attach_controller" 00:20:15.244 } 00:20:15.244 EOF 00:20:15.244 )") 00:20:15.244 17:07:30 -- nvmf/common.sh@543 -- # cat 00:20:15.244 17:07:30 -- nvmf/common.sh@545 -- # jq . 
00:20:15.244 17:07:30 -- nvmf/common.sh@546 -- # IFS=, 00:20:15.244 17:07:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:15.244 "params": { 00:20:15.244 "name": "Nvme1", 00:20:15.244 "trtype": "tcp", 00:20:15.244 "traddr": "10.0.0.2", 00:20:15.244 "adrfam": "ipv4", 00:20:15.244 "trsvcid": "4420", 00:20:15.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.244 "hdgst": false, 00:20:15.244 "ddgst": false 00:20:15.244 }, 00:20:15.244 "method": "bdev_nvme_attach_controller" 00:20:15.244 }' 00:20:15.244 [2024-04-18 17:07:30.914988] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:15.244 [2024-04-18 17:07:30.915067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761262 ] 00:20:15.244 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.503 [2024-04-18 17:07:30.975622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.503 [2024-04-18 17:07:31.092276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.763 Running I/O for 1 seconds... 
00:20:16.699 00:20:16.699 Latency(us) 00:20:16.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.699 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:16.699 Verification LBA range: start 0x0 length 0x4000 00:20:16.699 Nvme1n1 : 1.01 8486.00 33.15 0.00 0.00 15024.90 3373.89 19709.35 00:20:16.699 =================================================================================================================== 00:20:16.699 Total : 8486.00 33.15 0.00 0.00 15024.90 3373.89 19709.35 00:20:17.191 17:07:32 -- host/bdevperf.sh@30 -- # bdevperfpid=1761520 00:20:17.191 17:07:32 -- host/bdevperf.sh@32 -- # sleep 3 00:20:17.191 17:07:32 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:20:17.191 17:07:32 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:20:17.191 17:07:32 -- nvmf/common.sh@521 -- # config=() 00:20:17.191 17:07:32 -- nvmf/common.sh@521 -- # local subsystem config 00:20:17.191 17:07:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.191 17:07:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.191 { 00:20:17.191 "params": { 00:20:17.191 "name": "Nvme$subsystem", 00:20:17.191 "trtype": "$TEST_TRANSPORT", 00:20:17.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.191 "adrfam": "ipv4", 00:20:17.191 "trsvcid": "$NVMF_PORT", 00:20:17.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.191 "hdgst": ${hdgst:-false}, 00:20:17.191 "ddgst": ${ddgst:-false} 00:20:17.191 }, 00:20:17.191 "method": "bdev_nvme_attach_controller" 00:20:17.191 } 00:20:17.192 EOF 00:20:17.192 )") 00:20:17.192 17:07:32 -- nvmf/common.sh@543 -- # cat 00:20:17.192 17:07:32 -- nvmf/common.sh@545 -- # jq . 
00:20:17.192 17:07:32 -- nvmf/common.sh@546 -- # IFS=, 00:20:17.192 17:07:32 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:17.192 "params": { 00:20:17.192 "name": "Nvme1", 00:20:17.192 "trtype": "tcp", 00:20:17.192 "traddr": "10.0.0.2", 00:20:17.192 "adrfam": "ipv4", 00:20:17.192 "trsvcid": "4420", 00:20:17.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.192 "hdgst": false, 00:20:17.192 "ddgst": false 00:20:17.192 }, 00:20:17.192 "method": "bdev_nvme_attach_controller" 00:20:17.192 }' 00:20:17.192 [2024-04-18 17:07:32.588732] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:17.192 [2024-04-18 17:07:32.588830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761520 ] 00:20:17.192 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.192 [2024-04-18 17:07:32.647833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.192 [2024-04-18 17:07:32.752471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.451 Running I/O for 15 seconds... 
00:20:19.987 17:07:35 -- host/bdevperf.sh@33 -- # kill -9 1761233 00:20:19.987 17:07:35 -- host/bdevperf.sh@35 -- # sleep 3 00:20:19.987 [2024-04-18 17:07:35.563166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:19.987 [2024-04-18 17:07:35.563659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.987 [2024-04-18 17:07:35.563899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.987 [2024-04-18 17:07:35.563917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.563932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.563949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.563964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.563981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.563996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 
[2024-04-18 17:07:35.564251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 
[2024-04-18 17:07:35.564830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.564976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.564993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.565008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.565025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.565040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.565058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.565073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.565089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.565104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.565121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.565136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.565153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.565168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.565185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.565200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.988 [2024-04-18 17:07:35.565217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.988 [2024-04-18 17:07:35.565231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 
[2024-04-18 17:07:35.565387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 
[2024-04-18 17:07:35.565932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.565979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.565996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.989 [2024-04-18 17:07:35.566208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 
[2024-04-18 17:07:35.566503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.989 [2024-04-18 17:07:35.566535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.989 [2024-04-18 17:07:35.566550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.566974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.566995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 
[2024-04-18 17:07:35.567061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-04-18 17:07:35.567520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f85a0 is same with the state(5) to be set 00:20:19.990 [2024-04-18 17:07:35.567554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:19.990 [2024-04-18 17:07:35.567566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:19.990 [2024-04-18 17:07:35.567579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45384 len:8 PRP1 0x0 PRP2 0x0 00:20:19.990 [2024-04-18 17:07:35.567593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.990 [2024-04-18 17:07:35.567673] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10f85a0 was disconnected and freed. reset controller. 
00:20:19.990 [2024-04-18 17:07:35.571524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.990 [2024-04-18 17:07:35.571599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.990 [2024-04-18 17:07:35.572443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.990 [2024-04-18 17:07:35.572603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.990 [2024-04-18 17:07:35.572631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.990 [2024-04-18 17:07:35.572649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.990 [2024-04-18 17:07:35.572897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.990 [2024-04-18 17:07:35.573139] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.990 [2024-04-18 17:07:35.573163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.990 [2024-04-18 17:07:35.573180] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.990 [2024-04-18 17:07:35.576752] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.990 [2024-04-18 17:07:35.585787] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.990 [2024-04-18 17:07:35.586290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.990 [2024-04-18 17:07:35.586466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.990 [2024-04-18 17:07:35.586495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.990 [2024-04-18 17:07:35.586512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.990 [2024-04-18 17:07:35.586749] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.990 [2024-04-18 17:07:35.586996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.990 [2024-04-18 17:07:35.587021] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.990 [2024-04-18 17:07:35.587038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.990 [2024-04-18 17:07:35.590600] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.991 [2024-04-18 17:07:35.599629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.991 [2024-04-18 17:07:35.600043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.600298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.600348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.991 [2024-04-18 17:07:35.600366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.991 [2024-04-18 17:07:35.600614] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.991 [2024-04-18 17:07:35.600855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.991 [2024-04-18 17:07:35.600880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.991 [2024-04-18 17:07:35.600896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.991 [2024-04-18 17:07:35.604454] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.991 [2024-04-18 17:07:35.613474] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.991 [2024-04-18 17:07:35.613887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.614080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.614105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.991 [2024-04-18 17:07:35.614120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.991 [2024-04-18 17:07:35.614396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.991 [2024-04-18 17:07:35.614641] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.991 [2024-04-18 17:07:35.614667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.991 [2024-04-18 17:07:35.614683] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.991 [2024-04-18 17:07:35.618232] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.991 [2024-04-18 17:07:35.627461] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.991 [2024-04-18 17:07:35.627869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.628073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.628143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.991 [2024-04-18 17:07:35.628162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.991 [2024-04-18 17:07:35.628414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.991 [2024-04-18 17:07:35.628658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.991 [2024-04-18 17:07:35.628689] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.991 [2024-04-18 17:07:35.628706] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.991 [2024-04-18 17:07:35.632258] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.991 [2024-04-18 17:07:35.641276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.991 [2024-04-18 17:07:35.641719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.641862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.641891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.991 [2024-04-18 17:07:35.641909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.991 [2024-04-18 17:07:35.642148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.991 [2024-04-18 17:07:35.642401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.991 [2024-04-18 17:07:35.642427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.991 [2024-04-18 17:07:35.642443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.991 [2024-04-18 17:07:35.645995] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.991 [2024-04-18 17:07:35.655226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.991 [2024-04-18 17:07:35.655642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.655877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.655929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.991 [2024-04-18 17:07:35.655947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.991 [2024-04-18 17:07:35.656185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.991 [2024-04-18 17:07:35.656443] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.991 [2024-04-18 17:07:35.656469] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.991 [2024-04-18 17:07:35.656485] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.991 [2024-04-18 17:07:35.660037] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.991 [2024-04-18 17:07:35.669052] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.991 [2024-04-18 17:07:35.669438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.669585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.669615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.991 [2024-04-18 17:07:35.669633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.991 [2024-04-18 17:07:35.669871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.991 [2024-04-18 17:07:35.670113] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.991 [2024-04-18 17:07:35.670138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.991 [2024-04-18 17:07:35.670160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.991 [2024-04-18 17:07:35.673724] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.991 [2024-04-18 17:07:35.682952] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.991 [2024-04-18 17:07:35.683497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.683703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.991 [2024-04-18 17:07:35.683730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:19.991 [2024-04-18 17:07:35.683748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:19.991 [2024-04-18 17:07:35.683987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:19.991 [2024-04-18 17:07:35.684230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.991 [2024-04-18 17:07:35.684253] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.991 [2024-04-18 17:07:35.684268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.991 [2024-04-18 17:07:35.687836] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.253 [2024-04-18 17:07:35.696871] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.253 [2024-04-18 17:07:35.697286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.253 [2024-04-18 17:07:35.697419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.253 [2024-04-18 17:07:35.697449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.253 [2024-04-18 17:07:35.697467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.253 [2024-04-18 17:07:35.697706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.253 [2024-04-18 17:07:35.697947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.253 [2024-04-18 17:07:35.697971] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.697987] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.701556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.710800] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.711211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.711401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.711430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.711448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.711685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.711926] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.711951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.711967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.715539] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.724774] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.725184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.725373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.725408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.725424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.725677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.725920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.725945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.725961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.729518] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.738745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.739137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.739284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.739309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.739325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.739574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.739826] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.739851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.739866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.743398] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.752601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.753079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.753222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.753247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.753263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.753523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.753766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.753791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.753807] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.757371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.766456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.766895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.767085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.767111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.767127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.767396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.767649] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.767673] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.767689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.771248] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.780292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.780669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.780833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.780880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.780898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.781135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.781378] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.781416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.781432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.784988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.794273] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.794663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.794843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.794869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.794886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.795139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.795391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.795427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.795442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.798942] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.807767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.808105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.808264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.808290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.808306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.808575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.808787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.808808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.808822] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.811921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.821199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.821615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.821779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.254 [2024-04-18 17:07:35.821804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.254 [2024-04-18 17:07:35.821820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.254 [2024-04-18 17:07:35.822033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.254 [2024-04-18 17:07:35.822274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.254 [2024-04-18 17:07:35.822296] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.254 [2024-04-18 17:07:35.822309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.254 [2024-04-18 17:07:35.825794] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.254 [2024-04-18 17:07:35.834929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.254 [2024-04-18 17:07:35.835361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.835537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.835564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.835580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.835833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.836063] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.836086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.836100] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.839503] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.848282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.848662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.848807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.848838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.848855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.849097] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.849311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.849332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.849345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.852435] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.861553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.862025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.862197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.862223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.862240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.862494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.862733] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.862753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.862766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.865728] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.874843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.875212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.875386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.875413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.875430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.875683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.875893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.875913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.875925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.878903] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.888141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.888550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.888701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.888727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.888749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.889001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.889209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.889229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.889241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.892221] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.901379] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.901770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.901915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.901941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.901958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.902207] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.902424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.902445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.902458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.905398] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.914712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.915081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.915197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.915224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.915240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.915503] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.915717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.915738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.915751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.918730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.928036] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.928472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.928615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.928640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.928655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.928921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.929114] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.929134] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.929146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.932128] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.941210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.941581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.941750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.941776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.941791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.942044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.942237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.255 [2024-04-18 17:07:35.942257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.255 [2024-04-18 17:07:35.942269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.255 [2024-04-18 17:07:35.945208] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.255 [2024-04-18 17:07:35.954547] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.255 [2024-04-18 17:07:35.954933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.955082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.255 [2024-04-18 17:07:35.955110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.255 [2024-04-18 17:07:35.955126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.255 [2024-04-18 17:07:35.955367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.255 [2024-04-18 17:07:35.955624] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.256 [2024-04-18 17:07:35.955645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.256 [2024-04-18 17:07:35.955659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:35.958760] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:35.967778] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:35.968208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:35.968355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:35.968389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:35.968408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:35.968637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.518 [2024-04-18 17:07:35.968870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.518 [2024-04-18 17:07:35.968891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.518 [2024-04-18 17:07:35.968904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:35.971895] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:35.981113] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:35.981514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:35.981656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:35.981681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:35.981696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:35.981949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.518 [2024-04-18 17:07:35.982157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.518 [2024-04-18 17:07:35.982177] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.518 [2024-04-18 17:07:35.982190] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:35.985162] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:35.994272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:35.994742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:35.994860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:35.994885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:35.994901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:35.995136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.518 [2024-04-18 17:07:35.995344] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.518 [2024-04-18 17:07:35.995389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.518 [2024-04-18 17:07:35.995405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:35.998366] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:36.007614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:36.008054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.008203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.008228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:36.008244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:36.008511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.518 [2024-04-18 17:07:36.008731] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.518 [2024-04-18 17:07:36.008756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.518 [2024-04-18 17:07:36.008770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:36.011709] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:36.020779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:36.021153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.021269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.021294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:36.021310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:36.021548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.518 [2024-04-18 17:07:36.021797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.518 [2024-04-18 17:07:36.021819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.518 [2024-04-18 17:07:36.021832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:36.024772] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:36.033985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:36.034351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.034535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.034563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:36.034579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:36.034818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.518 [2024-04-18 17:07:36.035028] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.518 [2024-04-18 17:07:36.035049] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.518 [2024-04-18 17:07:36.035063] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:36.038038] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:36.047255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:36.047650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.047789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.047814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:36.047830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:36.048093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.518 [2024-04-18 17:07:36.048286] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.518 [2024-04-18 17:07:36.048306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.518 [2024-04-18 17:07:36.048324] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:36.051313] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:36.060544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:36.060940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.061065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.061090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:36.061106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:36.061360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.518 [2024-04-18 17:07:36.061584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.518 [2024-04-18 17:07:36.061607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.518 [2024-04-18 17:07:36.061621] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.518 [2024-04-18 17:07:36.064580] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.518 [2024-04-18 17:07:36.073776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-04-18 17:07:36.074129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.074278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.518 [2024-04-18 17:07:36.074305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.518 [2024-04-18 17:07:36.074321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.518 [2024-04-18 17:07:36.074546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.519 [2024-04-18 17:07:36.074786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.519 [2024-04-18 17:07:36.074808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.519 [2024-04-18 17:07:36.074821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.519 [2024-04-18 17:07:36.078215] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.519 [2024-04-18 17:07:36.087162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.519 [2024-04-18 17:07:36.087522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.519 [2024-04-18 17:07:36.087656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.519 [2024-04-18 17:07:36.087683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.519 [2024-04-18 17:07:36.087699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.519 [2024-04-18 17:07:36.087950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.519 [2024-04-18 17:07:36.088144] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.519 [2024-04-18 17:07:36.088164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.519 [2024-04-18 17:07:36.088177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.519 [2024-04-18 17:07:36.091218] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.519 [2024-04-18 17:07:36.100458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.519 [2024-04-18 17:07:36.100854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.519 [2024-04-18 17:07:36.100997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.519 [2024-04-18 17:07:36.101021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.519 [2024-04-18 17:07:36.101037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.519 [2024-04-18 17:07:36.101275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.519 [2024-04-18 17:07:36.101529] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.519 [2024-04-18 17:07:36.101552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.519 [2024-04-18 17:07:36.101566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.519 [2024-04-18 17:07:36.104540] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.519 [2024-04-18 17:07:36.113768] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.519 [2024-04-18 17:07:36.114201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.114349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.114375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.519 [2024-04-18 17:07:36.114401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.519 [2024-04-18 17:07:36.114631] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.519 [2024-04-18 17:07:36.114858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.519 [2024-04-18 17:07:36.114878] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.519 [2024-04-18 17:07:36.114892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.519 [2024-04-18 17:07:36.117831] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.519 [2024-04-18 17:07:36.127016] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.519 [2024-04-18 17:07:36.127450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.127594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.127619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.519 [2024-04-18 17:07:36.127635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.519 [2024-04-18 17:07:36.127878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.519 [2024-04-18 17:07:36.128085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.519 [2024-04-18 17:07:36.128106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.519 [2024-04-18 17:07:36.128119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.519 [2024-04-18 17:07:36.131097] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.519 [2024-04-18 17:07:36.140142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.519 [2024-04-18 17:07:36.140488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.140662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.140688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.519 [2024-04-18 17:07:36.140705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.519 [2024-04-18 17:07:36.140943] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.519 [2024-04-18 17:07:36.141151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.519 [2024-04-18 17:07:36.141172] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.519 [2024-04-18 17:07:36.141185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.519 [2024-04-18 17:07:36.144166] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.519 [2024-04-18 17:07:36.153370] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.519 [2024-04-18 17:07:36.153789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.153958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.153983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.519 [2024-04-18 17:07:36.153999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.519 [2024-04-18 17:07:36.154252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.519 [2024-04-18 17:07:36.154492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.519 [2024-04-18 17:07:36.154515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.519 [2024-04-18 17:07:36.154528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.519 [2024-04-18 17:07:36.157484] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.519 [2024-04-18 17:07:36.166690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.519 [2024-04-18 17:07:36.167057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.167201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.167226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.519 [2024-04-18 17:07:36.167241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.519 [2024-04-18 17:07:36.167493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.519 [2024-04-18 17:07:36.167713] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.519 [2024-04-18 17:07:36.167733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.519 [2024-04-18 17:07:36.167746] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.519 [2024-04-18 17:07:36.170684] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.519 [2024-04-18 17:07:36.179963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.519 [2024-04-18 17:07:36.180338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.180513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.180539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.519 [2024-04-18 17:07:36.180555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.519 [2024-04-18 17:07:36.180794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.519 [2024-04-18 17:07:36.181004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.519 [2024-04-18 17:07:36.181025] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.519 [2024-04-18 17:07:36.181038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.519 [2024-04-18 17:07:36.183977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.519 [2024-04-18 17:07:36.193256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.519 [2024-04-18 17:07:36.193608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.193780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.519 [2024-04-18 17:07:36.193807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.519 [2024-04-18 17:07:36.193823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.519 [2024-04-18 17:07:36.194079] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.519 [2024-04-18 17:07:36.194273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.519 [2024-04-18 17:07:36.194293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.519 [2024-04-18 17:07:36.194306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.519 [2024-04-18 17:07:36.197292] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.520 [2024-04-18 17:07:36.206481] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.520 [2024-04-18 17:07:36.206849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.520 [2024-04-18 17:07:36.206956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.520 [2024-04-18 17:07:36.206980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.520 [2024-04-18 17:07:36.206995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.520 [2024-04-18 17:07:36.207217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.520 [2024-04-18 17:07:36.207453] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.520 [2024-04-18 17:07:36.207476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.520 [2024-04-18 17:07:36.207490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.520 [2024-04-18 17:07:36.210486] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.520 [2024-04-18 17:07:36.219852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.520 [2024-04-18 17:07:36.220180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.520 [2024-04-18 17:07:36.220320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.520 [2024-04-18 17:07:36.220346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.520 [2024-04-18 17:07:36.220395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.520 [2024-04-18 17:07:36.220627] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.520 [2024-04-18 17:07:36.220875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.520 [2024-04-18 17:07:36.220896] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.520 [2024-04-18 17:07:36.220909] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.782 [2024-04-18 17:07:36.223920] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.782 [2024-04-18 17:07:36.233138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.782 [2024-04-18 17:07:36.233530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.233702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.233730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.782 [2024-04-18 17:07:36.233762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.782 [2024-04-18 17:07:36.234008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.782 [2024-04-18 17:07:36.234202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.782 [2024-04-18 17:07:36.234222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.782 [2024-04-18 17:07:36.234235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.782 [2024-04-18 17:07:36.237257] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.782 [2024-04-18 17:07:36.246290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.782 [2024-04-18 17:07:36.246674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.246805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.246831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.782 [2024-04-18 17:07:36.246848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.782 [2024-04-18 17:07:36.247086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.782 [2024-04-18 17:07:36.247279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.782 [2024-04-18 17:07:36.247299] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.782 [2024-04-18 17:07:36.247311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.782 [2024-04-18 17:07:36.250252] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.782 [2024-04-18 17:07:36.259528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.782 [2024-04-18 17:07:36.259981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.260149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.260173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.782 [2024-04-18 17:07:36.260189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.782 [2024-04-18 17:07:36.260453] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.782 [2024-04-18 17:07:36.260651] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.782 [2024-04-18 17:07:36.260682] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.782 [2024-04-18 17:07:36.260695] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.782 [2024-04-18 17:07:36.263643] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.782 [2024-04-18 17:07:36.272729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.782 [2024-04-18 17:07:36.273099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.273267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.273293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.782 [2024-04-18 17:07:36.273310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.782 [2024-04-18 17:07:36.273587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.782 [2024-04-18 17:07:36.273799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.782 [2024-04-18 17:07:36.273819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.782 [2024-04-18 17:07:36.273831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.782 [2024-04-18 17:07:36.276796] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.782 [2024-04-18 17:07:36.285918] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.782 [2024-04-18 17:07:36.286303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.286445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.286472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.782 [2024-04-18 17:07:36.286489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.782 [2024-04-18 17:07:36.286730] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.782 [2024-04-18 17:07:36.286923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.782 [2024-04-18 17:07:36.286942] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.782 [2024-04-18 17:07:36.286955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.782 [2024-04-18 17:07:36.289898] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.782 [2024-04-18 17:07:36.299155] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.782 [2024-04-18 17:07:36.299528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.299692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.782 [2024-04-18 17:07:36.299719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.782 [2024-04-18 17:07:36.299735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.782 [2024-04-18 17:07:36.299997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.782 [2024-04-18 17:07:36.300195] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.782 [2024-04-18 17:07:36.300214] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.782 [2024-04-18 17:07:36.300227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.782 [2024-04-18 17:07:36.303241] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.312462] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.312793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.312953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.312978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.312994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.313234] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.313473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.313495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.313508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.316473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.325763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.326154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.326295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.326322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.326339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.326575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.326795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.326816] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.326829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.330202] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.339122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.339493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.339634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.339661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.339677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.339919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.340128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.340157] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.340170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.343180] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.352466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.352923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.353064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.353091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.353107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.353371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.353597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.353620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.353633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.356703] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.365709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.366143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.366283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.366308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.366324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.366562] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.366790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.366811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.366824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.369814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.378934] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.379368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.379539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.379566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.379582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.379836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.380029] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.380049] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.380067] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.383045] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.392164] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.392533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.392646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.392672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.392689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.392941] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.393134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.393153] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.393166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.396147] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.405453] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.405887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.406025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.406051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.406068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.406319] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.406541] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.406562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.406575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.409549] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.418656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.419035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.419179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.419213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.419230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.783 [2024-04-18 17:07:36.419483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.783 [2024-04-18 17:07:36.419698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.783 [2024-04-18 17:07:36.419719] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.783 [2024-04-18 17:07:36.419732] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.783 [2024-04-18 17:07:36.422759] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.783 [2024-04-18 17:07:36.432022] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.783 [2024-04-18 17:07:36.432424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.432568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.783 [2024-04-18 17:07:36.432596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.783 [2024-04-18 17:07:36.432612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.784 [2024-04-18 17:07:36.432861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.784 [2024-04-18 17:07:36.433055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.784 [2024-04-18 17:07:36.433076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.784 [2024-04-18 17:07:36.433089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.784 [2024-04-18 17:07:36.436185] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.784 [2024-04-18 17:07:36.445293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.784 [2024-04-18 17:07:36.445664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.784 [2024-04-18 17:07:36.445804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.784 [2024-04-18 17:07:36.445830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.784 [2024-04-18 17:07:36.445847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.784 [2024-04-18 17:07:36.446081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.784 [2024-04-18 17:07:36.446289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.784 [2024-04-18 17:07:36.446310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.784 [2024-04-18 17:07:36.446323] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.784 [2024-04-18 17:07:36.449313] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.784 [2024-04-18 17:07:36.458667] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.784 [2024-04-18 17:07:36.459054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.784 [2024-04-18 17:07:36.459226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.784 [2024-04-18 17:07:36.459252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:20.784 [2024-04-18 17:07:36.459267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:20.784 [2024-04-18 17:07:36.459532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:20.784 [2024-04-18 17:07:36.459745] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.784 [2024-04-18 17:07:36.459766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.784 [2024-04-18 17:07:36.459778] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.784 [2024-04-18 17:07:36.462749] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.784 [2024-04-18 17:07:36.471969] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.784 [2024-04-18 17:07:36.472289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.784 [2024-04-18 17:07:36.472407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.784 [2024-04-18 17:07:36.472448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.784 [2024-04-18 17:07:36.472464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.784 [2024-04-18 17:07:36.472703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.784 [2024-04-18 17:07:36.472911] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.784 [2024-04-18 17:07:36.472932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.784 [2024-04-18 17:07:36.472945] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.784 [2024-04-18 17:07:36.475957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.784 [2024-04-18 17:07:36.485475] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.784 [2024-04-18 17:07:36.485849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.784 [2024-04-18 17:07:36.485975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.784 [2024-04-18 17:07:36.486013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:20.784 [2024-04-18 17:07:36.486030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:20.784 [2024-04-18 17:07:36.486264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:20.784 [2024-04-18 17:07:36.486525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.784 [2024-04-18 17:07:36.486548] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.046 [2024-04-18 17:07:36.486562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.046 [2024-04-18 17:07:36.489591] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.046 [2024-04-18 17:07:36.498773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.499176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.499315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.499342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.499359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.499611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.499841] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.499863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.499875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.502815] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.512023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.512408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.512558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.512583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.512599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.512853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.513045] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.513066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.513078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.516055] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.525295] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.525764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.525929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.525956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.525972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.526226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.526465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.526487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.526501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.529457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.539124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.539524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.539725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.539775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.539794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.540033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.540275] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.540300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.540316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.543872] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.553103] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.553532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.553665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.553702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.553721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.553960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.554203] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.554227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.554243] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.557799] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.567023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.567439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.567591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.567618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.567635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.567873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.568113] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.568138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.568155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.571712] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.580943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.581433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.581623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.581653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.581671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.581908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.582151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.582176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.582192] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.585758] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.594837] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.595250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.595404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.595433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.595456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.595695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.595936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.595961] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.595978] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.599779] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.608796] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.609229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.609392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.609421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.609453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.609682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.047 [2024-04-18 17:07:36.609942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.047 [2024-04-18 17:07:36.609967] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.047 [2024-04-18 17:07:36.609983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.047 [2024-04-18 17:07:36.613546] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.047 [2024-04-18 17:07:36.622781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.047 [2024-04-18 17:07:36.623177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.623324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.047 [2024-04-18 17:07:36.623349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.047 [2024-04-18 17:07:36.623365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.047 [2024-04-18 17:07:36.623628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.048 [2024-04-18 17:07:36.623870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.048 [2024-04-18 17:07:36.623895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.048 [2024-04-18 17:07:36.623911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.048 [2024-04-18 17:07:36.627473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.048 [2024-04-18 17:07:36.636715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.048 [2024-04-18 17:07:36.637124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.637247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.637274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.048 [2024-04-18 17:07:36.637292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.048 [2024-04-18 17:07:36.637550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.048 [2024-04-18 17:07:36.637792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.048 [2024-04-18 17:07:36.637817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.048 [2024-04-18 17:07:36.637833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.048 [2024-04-18 17:07:36.641389] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.048 [2024-04-18 17:07:36.650613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.048 [2024-04-18 17:07:36.651021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.651171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.651198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.048 [2024-04-18 17:07:36.651216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.048 [2024-04-18 17:07:36.651468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.048 [2024-04-18 17:07:36.651710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.048 [2024-04-18 17:07:36.651734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.048 [2024-04-18 17:07:36.651749] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.048 [2024-04-18 17:07:36.655302] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.048 [2024-04-18 17:07:36.664535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.048 [2024-04-18 17:07:36.664919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.665065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.665093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.048 [2024-04-18 17:07:36.665110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.048 [2024-04-18 17:07:36.665347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.048 [2024-04-18 17:07:36.665601] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.048 [2024-04-18 17:07:36.665627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.048 [2024-04-18 17:07:36.665642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.048 [2024-04-18 17:07:36.669192] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.048 [2024-04-18 17:07:36.678424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.048 [2024-04-18 17:07:36.678828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.678984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.679012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.048 [2024-04-18 17:07:36.679029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.048 [2024-04-18 17:07:36.679266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.048 [2024-04-18 17:07:36.679526] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.048 [2024-04-18 17:07:36.679553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.048 [2024-04-18 17:07:36.679569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.048 [2024-04-18 17:07:36.683121] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.048 [2024-04-18 17:07:36.692377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.048 [2024-04-18 17:07:36.692770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.692912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.692936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.048 [2024-04-18 17:07:36.692952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.048 [2024-04-18 17:07:36.693193] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.048 [2024-04-18 17:07:36.693448] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.048 [2024-04-18 17:07:36.693473] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.048 [2024-04-18 17:07:36.693489] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.048 [2024-04-18 17:07:36.697043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.048 [2024-04-18 17:07:36.706258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.048 [2024-04-18 17:07:36.706750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.706876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.706901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.048 [2024-04-18 17:07:36.706932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.048 [2024-04-18 17:07:36.707179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.048 [2024-04-18 17:07:36.707435] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.048 [2024-04-18 17:07:36.707461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.048 [2024-04-18 17:07:36.707477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.048 [2024-04-18 17:07:36.711030] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.048 [2024-04-18 17:07:36.720255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.048 [2024-04-18 17:07:36.720659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.720846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.048 [2024-04-18 17:07:36.720874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.048 [2024-04-18 17:07:36.720891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.048 [2024-04-18 17:07:36.721129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.048 [2024-04-18 17:07:36.721371] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.048 [2024-04-18 17:07:36.721416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.048 [2024-04-18 17:07:36.721433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.048 [2024-04-18 17:07:36.724988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.048 [2024-04-18 17:07:36.734214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.048 [2024-04-18 17:07:36.734740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.048 [2024-04-18 17:07:36.735002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.048 [2024-04-18 17:07:36.735052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.048 [2024-04-18 17:07:36.735070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.048 [2024-04-18 17:07:36.735308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.048 [2024-04-18 17:07:36.735565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.048 [2024-04-18 17:07:36.735591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.048 [2024-04-18 17:07:36.735607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.048 [2024-04-18 17:07:36.739158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.048 [2024-04-18 17:07:36.748181] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.048 [2024-04-18 17:07:36.748600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.048 [2024-04-18 17:07:36.748813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.048 [2024-04-18 17:07:36.748863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.048 [2024-04-18 17:07:36.748881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.048 [2024-04-18 17:07:36.749120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.048 [2024-04-18 17:07:36.749363] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.048 [2024-04-18 17:07:36.749401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.048 [2024-04-18 17:07:36.749419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.309 [2024-04-18 17:07:36.752978] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.309 [2024-04-18 17:07:36.762004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.309 [2024-04-18 17:07:36.762418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.309 [2024-04-18 17:07:36.762660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.309 [2024-04-18 17:07:36.762689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.309 [2024-04-18 17:07:36.762708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.309 [2024-04-18 17:07:36.762946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.309 [2024-04-18 17:07:36.763188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.309 [2024-04-18 17:07:36.763213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.309 [2024-04-18 17:07:36.763235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.309 [2024-04-18 17:07:36.766804] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.309 [2024-04-18 17:07:36.775827] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.309 [2024-04-18 17:07:36.776235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.309 [2024-04-18 17:07:36.776394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.309 [2024-04-18 17:07:36.776424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.309 [2024-04-18 17:07:36.776443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.309 [2024-04-18 17:07:36.776680] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.309 [2024-04-18 17:07:36.776923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.309 [2024-04-18 17:07:36.776948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.309 [2024-04-18 17:07:36.776964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.309 [2024-04-18 17:07:36.780526] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.309 [2024-04-18 17:07:36.789763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.309 [2024-04-18 17:07:36.790178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.309 [2024-04-18 17:07:36.790334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.309 [2024-04-18 17:07:36.790363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.309 [2024-04-18 17:07:36.790392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.309 [2024-04-18 17:07:36.790633] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.309 [2024-04-18 17:07:36.790876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.309 [2024-04-18 17:07:36.790901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.309 [2024-04-18 17:07:36.790917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.309 [2024-04-18 17:07:36.794476] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.309 [2024-04-18 17:07:36.803751] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.309 [2024-04-18 17:07:36.804164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.309 [2024-04-18 17:07:36.804337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.309 [2024-04-18 17:07:36.804367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.309 [2024-04-18 17:07:36.804396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.309 [2024-04-18 17:07:36.804637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.804880] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.804905] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.804921] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.808480] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.817723] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.818208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.818461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.818490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.818509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.818746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.818988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.819013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.819029] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.822589] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.831608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.832004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.832151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.832177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.832193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.832456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.832699] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.832724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.832740] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.836293] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.845533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.845917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.846071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.846100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.846117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.846355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.846606] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.846633] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.846649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.850203] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.859440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.859959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.860189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.860254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.860272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.860523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.860767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.860793] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.860809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.864367] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.873387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.873797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.873940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.873967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.873984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.874221] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.874476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.874502] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.874518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.878068] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.887351] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.887834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.887977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.888001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.888018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.888270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.888525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.888551] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.888568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.892121] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.901350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.901766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.902063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.902118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.902135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.902374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.902629] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.902654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.902670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.906224] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.915260] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.915668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.915838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.915864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.915895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.916121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.916374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.310 [2024-04-18 17:07:36.916423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.310 [2024-04-18 17:07:36.916440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.310 [2024-04-18 17:07:36.919996] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.310 [2024-04-18 17:07:36.929238] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.310 [2024-04-18 17:07:36.929630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.929795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.310 [2024-04-18 17:07:36.929822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.310 [2024-04-18 17:07:36.929839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.310 [2024-04-18 17:07:36.930077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.310 [2024-04-18 17:07:36.930320] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.311 [2024-04-18 17:07:36.930346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.311 [2024-04-18 17:07:36.930362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.311 [2024-04-18 17:07:36.933936] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.311 [2024-04-18 17:07:36.943185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.311 [2024-04-18 17:07:36.943615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.943743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.943772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.311 [2024-04-18 17:07:36.943795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.311 [2024-04-18 17:07:36.944033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.311 [2024-04-18 17:07:36.944274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.311 [2024-04-18 17:07:36.944299] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.311 [2024-04-18 17:07:36.944315] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.311 [2024-04-18 17:07:36.947915] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.311 [2024-04-18 17:07:36.957168] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.311 [2024-04-18 17:07:36.957570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.957752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.957781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.311 [2024-04-18 17:07:36.957808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.311 [2024-04-18 17:07:36.958045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.311 [2024-04-18 17:07:36.958288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.311 [2024-04-18 17:07:36.958311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.311 [2024-04-18 17:07:36.958327] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.311 [2024-04-18 17:07:36.961894] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.311 [2024-04-18 17:07:36.971148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.311 [2024-04-18 17:07:36.971581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.971805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.971834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.311 [2024-04-18 17:07:36.971852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.311 [2024-04-18 17:07:36.972089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.311 [2024-04-18 17:07:36.972330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.311 [2024-04-18 17:07:36.972354] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.311 [2024-04-18 17:07:36.972370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.311 [2024-04-18 17:07:36.975938] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.311 [2024-04-18 17:07:36.984983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.311 [2024-04-18 17:07:36.985472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.985630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.985655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.311 [2024-04-18 17:07:36.985672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.311 [2024-04-18 17:07:36.985940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.311 [2024-04-18 17:07:36.986183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.311 [2024-04-18 17:07:36.986208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.311 [2024-04-18 17:07:36.986224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.311 [2024-04-18 17:07:36.989791] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.311 [2024-04-18 17:07:36.998841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.311 [2024-04-18 17:07:36.999339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.999507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:36.999537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.311 [2024-04-18 17:07:36.999554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.311 [2024-04-18 17:07:36.999791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.311 [2024-04-18 17:07:37.000033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.311 [2024-04-18 17:07:37.000057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.311 [2024-04-18 17:07:37.000073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.311 [2024-04-18 17:07:37.003639] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.311 [2024-04-18 17:07:37.012853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.311 [2024-04-18 17:07:37.013282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:37.013445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.311 [2024-04-18 17:07:37.013472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.311 [2024-04-18 17:07:37.013489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.311 [2024-04-18 17:07:37.013726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.311 [2024-04-18 17:07:37.013984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.311 [2024-04-18 17:07:37.014009] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.311 [2024-04-18 17:07:37.014024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.573 [2024-04-18 17:07:37.017599] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.573 [2024-04-18 17:07:37.026846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.573 [2024-04-18 17:07:37.027255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.027455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.027483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.573 [2024-04-18 17:07:37.027499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.573 [2024-04-18 17:07:37.027750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.573 [2024-04-18 17:07:37.028000] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.573 [2024-04-18 17:07:37.028025] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.573 [2024-04-18 17:07:37.028041] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.573 [2024-04-18 17:07:37.031610] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.573 [2024-04-18 17:07:37.040857] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.573 [2024-04-18 17:07:37.041262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.041447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.041477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.573 [2024-04-18 17:07:37.041495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.573 [2024-04-18 17:07:37.041733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.573 [2024-04-18 17:07:37.041975] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.573 [2024-04-18 17:07:37.041999] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.573 [2024-04-18 17:07:37.042015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.573 [2024-04-18 17:07:37.045585] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.573 [2024-04-18 17:07:37.054831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.573 [2024-04-18 17:07:37.055249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.055395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.055422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.573 [2024-04-18 17:07:37.055438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.573 [2024-04-18 17:07:37.055689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.573 [2024-04-18 17:07:37.055931] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.573 [2024-04-18 17:07:37.055955] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.573 [2024-04-18 17:07:37.055971] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.573 [2024-04-18 17:07:37.059542] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.573 [2024-04-18 17:07:37.068780] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.573 [2024-04-18 17:07:37.069160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.069309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.069337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.573 [2024-04-18 17:07:37.069354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.573 [2024-04-18 17:07:37.069603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.573 [2024-04-18 17:07:37.069847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.573 [2024-04-18 17:07:37.069877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.573 [2024-04-18 17:07:37.069895] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.573 [2024-04-18 17:07:37.073457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.573 [2024-04-18 17:07:37.082692] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.573 [2024-04-18 17:07:37.083155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.083303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.573 [2024-04-18 17:07:37.083332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.573 [2024-04-18 17:07:37.083350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.573 [2024-04-18 17:07:37.083597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.573 [2024-04-18 17:07:37.083841] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.573 [2024-04-18 17:07:37.083866] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.573 [2024-04-18 17:07:37.083882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.573 [2024-04-18 17:07:37.087441] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.573 [2024-04-18 17:07:37.096682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:21.573 [2024-04-18 17:07:37.097087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.574 [2024-04-18 17:07:37.097228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:21.574 [2024-04-18 17:07:37.097257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:21.574 [2024-04-18 17:07:37.097275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:21.574 [2024-04-18 17:07:37.097526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:21.574 [2024-04-18 17:07:37.097768] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:21.574 [2024-04-18 17:07:37.097793] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:21.574 [2024-04-18 17:07:37.097809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:21.574 [2024-04-18 17:07:37.101362] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:21.574 [2024-04-18 17:07:37.110602] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.111122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.111328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.111356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.574 [2024-04-18 17:07:37.111373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.574 [2024-04-18 17:07:37.111620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.574 [2024-04-18 17:07:37.111873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.574 [2024-04-18 17:07:37.111898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.574 [2024-04-18 17:07:37.111920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.574 [2024-04-18 17:07:37.115486] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.574 [2024-04-18 17:07:37.124507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.124983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.125135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.125163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.574 [2024-04-18 17:07:37.125180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.574 [2024-04-18 17:07:37.125431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.574 [2024-04-18 17:07:37.125672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.574 [2024-04-18 17:07:37.125698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.574 [2024-04-18 17:07:37.125714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.574 [2024-04-18 17:07:37.129266] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.574 [2024-04-18 17:07:37.138502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.138921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.139098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.139125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.574 [2024-04-18 17:07:37.139143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.574 [2024-04-18 17:07:37.139394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.574 [2024-04-18 17:07:37.139637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.574 [2024-04-18 17:07:37.139662] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.574 [2024-04-18 17:07:37.139678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.574 [2024-04-18 17:07:37.143229] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.574 [2024-04-18 17:07:37.152470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.152881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.152997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.153024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.574 [2024-04-18 17:07:37.153042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.574 [2024-04-18 17:07:37.153279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.574 [2024-04-18 17:07:37.153536] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.574 [2024-04-18 17:07:37.153561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.574 [2024-04-18 17:07:37.153576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.574 [2024-04-18 17:07:37.157133] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.574 [2024-04-18 17:07:37.166359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.166785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.166940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.166968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.574 [2024-04-18 17:07:37.166985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.574 [2024-04-18 17:07:37.167223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.574 [2024-04-18 17:07:37.167478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.574 [2024-04-18 17:07:37.167504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.574 [2024-04-18 17:07:37.167521] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.574 [2024-04-18 17:07:37.171075] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.574 [2024-04-18 17:07:37.180307] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.180735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.180894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.180923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.574 [2024-04-18 17:07:37.180941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.574 [2024-04-18 17:07:37.181179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.574 [2024-04-18 17:07:37.181438] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.574 [2024-04-18 17:07:37.181463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.574 [2024-04-18 17:07:37.181480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.574 [2024-04-18 17:07:37.185031] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.574 [2024-04-18 17:07:37.194254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.194667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.194896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.194925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.574 [2024-04-18 17:07:37.194944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.574 [2024-04-18 17:07:37.195181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.574 [2024-04-18 17:07:37.195439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.574 [2024-04-18 17:07:37.195465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.574 [2024-04-18 17:07:37.195481] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.574 [2024-04-18 17:07:37.199038] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.574 [2024-04-18 17:07:37.208267] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.208688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.208921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.208951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.574 [2024-04-18 17:07:37.208969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.574 [2024-04-18 17:07:37.209207] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.574 [2024-04-18 17:07:37.209465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.574 [2024-04-18 17:07:37.209491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.574 [2024-04-18 17:07:37.209507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.574 [2024-04-18 17:07:37.213059] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.574 [2024-04-18 17:07:37.222119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.574 [2024-04-18 17:07:37.222538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.574 [2024-04-18 17:07:37.222763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.575 [2024-04-18 17:07:37.222816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.575 [2024-04-18 17:07:37.222835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.575 [2024-04-18 17:07:37.223072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.575 [2024-04-18 17:07:37.223315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.575 [2024-04-18 17:07:37.223340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.575 [2024-04-18 17:07:37.223356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.575 [2024-04-18 17:07:37.226921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.575 [2024-04-18 17:07:37.235941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.575 [2024-04-18 17:07:37.236359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.575 [2024-04-18 17:07:37.236558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.575 [2024-04-18 17:07:37.236588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.575 [2024-04-18 17:07:37.236607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.575 [2024-04-18 17:07:37.236845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.575 [2024-04-18 17:07:37.237088] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.575 [2024-04-18 17:07:37.237112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.575 [2024-04-18 17:07:37.237127] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.575 [2024-04-18 17:07:37.240690] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.575 [2024-04-18 17:07:37.249927] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.575 [2024-04-18 17:07:37.250405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.575 [2024-04-18 17:07:37.250564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.575 [2024-04-18 17:07:37.250592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.575 [2024-04-18 17:07:37.250609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.575 [2024-04-18 17:07:37.250847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.575 [2024-04-18 17:07:37.251089] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.575 [2024-04-18 17:07:37.251114] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.575 [2024-04-18 17:07:37.251130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.575 [2024-04-18 17:07:37.254693] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.575 [2024-04-18 17:07:37.263943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.575 [2024-04-18 17:07:37.264358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.575 [2024-04-18 17:07:37.264575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.575 [2024-04-18 17:07:37.264605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.575 [2024-04-18 17:07:37.264623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.575 [2024-04-18 17:07:37.264862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.575 [2024-04-18 17:07:37.265103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.575 [2024-04-18 17:07:37.265128] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.575 [2024-04-18 17:07:37.265143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.575 [2024-04-18 17:07:37.268721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.834 [2024-04-18 17:07:37.277974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.834 [2024-04-18 17:07:37.278398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.834 [2024-04-18 17:07:37.278565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.834 [2024-04-18 17:07:37.278594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.834 [2024-04-18 17:07:37.278612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.834 [2024-04-18 17:07:37.278849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.834 [2024-04-18 17:07:37.279092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.834 [2024-04-18 17:07:37.279117] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.834 [2024-04-18 17:07:37.279134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.834 [2024-04-18 17:07:37.282700] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.834 [2024-04-18 17:07:37.291946] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.835 [2024-04-18 17:07:37.292350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.292532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.292567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.835 [2024-04-18 17:07:37.292586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.835 [2024-04-18 17:07:37.292823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.835 [2024-04-18 17:07:37.293066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.835 [2024-04-18 17:07:37.293091] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.835 [2024-04-18 17:07:37.293107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.835 [2024-04-18 17:07:37.296676] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.835 [2024-04-18 17:07:37.305911] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.835 [2024-04-18 17:07:37.306327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.306498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.306528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.835 [2024-04-18 17:07:37.306546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.835 [2024-04-18 17:07:37.306784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.835 [2024-04-18 17:07:37.307026] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.835 [2024-04-18 17:07:37.307051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.835 [2024-04-18 17:07:37.307068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.835 [2024-04-18 17:07:37.310634] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.835 [2024-04-18 17:07:37.319866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.835 [2024-04-18 17:07:37.320272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.320446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.320483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.835 [2024-04-18 17:07:37.320506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.835 [2024-04-18 17:07:37.320746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.835 [2024-04-18 17:07:37.320989] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.835 [2024-04-18 17:07:37.321014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.835 [2024-04-18 17:07:37.321030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.835 [2024-04-18 17:07:37.324596] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.835 [2024-04-18 17:07:37.333839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.835 [2024-04-18 17:07:37.334248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.334427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.334456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.835 [2024-04-18 17:07:37.334479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.835 [2024-04-18 17:07:37.334717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.835 [2024-04-18 17:07:37.334959] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.835 [2024-04-18 17:07:37.334983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.835 [2024-04-18 17:07:37.334999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.835 [2024-04-18 17:07:37.338566] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.835 [2024-04-18 17:07:37.347802] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.835 [2024-04-18 17:07:37.348210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.348338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.348365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.835 [2024-04-18 17:07:37.348390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.835 [2024-04-18 17:07:37.348630] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.835 [2024-04-18 17:07:37.348871] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.835 [2024-04-18 17:07:37.348896] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.835 [2024-04-18 17:07:37.348913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.835 [2024-04-18 17:07:37.352470] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.835 [2024-04-18 17:07:37.361713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.835 [2024-04-18 17:07:37.362097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.362216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.362244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.835 [2024-04-18 17:07:37.362261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.835 [2024-04-18 17:07:37.362513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.835 [2024-04-18 17:07:37.362756] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.835 [2024-04-18 17:07:37.362782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.835 [2024-04-18 17:07:37.362798] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.835 [2024-04-18 17:07:37.366349] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.835 [2024-04-18 17:07:37.375591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.835 [2024-04-18 17:07:37.376000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.376181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.835 [2024-04-18 17:07:37.376209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:21.835 [2024-04-18 17:07:37.376226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:21.835 [2024-04-18 17:07:37.376485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:21.835 [2024-04-18 17:07:37.376729] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.835 [2024-04-18 17:07:37.376755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.835 [2024-04-18 17:07:37.376771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.835 [2024-04-18 17:07:37.380325] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the identical reset cycle (resetting controller, connect() failed errno = 111, controller reinitialization failed, Resetting controller failed.) repeats 26 more times between 17:07:37.389559 and 17:07:37.742960 ...]
00:20:22.097 [2024-04-18 17:07:37.751972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.097 [2024-04-18 17:07:37.752395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.097 [2024-04-18 17:07:37.752543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.097 [2024-04-18 17:07:37.752572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.097 [2024-04-18 17:07:37.752590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.097 [2024-04-18 17:07:37.752826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.097 [2024-04-18 17:07:37.753068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.097 [2024-04-18 17:07:37.753092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.097 [2024-04-18 17:07:37.753108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.097 [2024-04-18 17:07:37.756667] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.097 [2024-04-18 17:07:37.765891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.097 [2024-04-18 17:07:37.766296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.097 [2024-04-18 17:07:37.766455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.097 [2024-04-18 17:07:37.766484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.097 [2024-04-18 17:07:37.766502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.097 [2024-04-18 17:07:37.766739] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.098 [2024-04-18 17:07:37.766987] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.098 [2024-04-18 17:07:37.767011] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.098 [2024-04-18 17:07:37.767027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.098 [2024-04-18 17:07:37.770596] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.098 [2024-04-18 17:07:37.779828] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.098 [2024-04-18 17:07:37.780248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.098 [2024-04-18 17:07:37.780410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.098 [2024-04-18 17:07:37.780440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.098 [2024-04-18 17:07:37.780457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.098 [2024-04-18 17:07:37.780695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.098 [2024-04-18 17:07:37.780937] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.098 [2024-04-18 17:07:37.780962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.098 [2024-04-18 17:07:37.780977] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.098 [2024-04-18 17:07:37.784535] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.098 [2024-04-18 17:07:37.793759] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.098 [2024-04-18 17:07:37.794164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.098 [2024-04-18 17:07:37.794319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.098 [2024-04-18 17:07:37.794347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.098 [2024-04-18 17:07:37.794364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.098 [2024-04-18 17:07:37.794611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.098 [2024-04-18 17:07:37.794854] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.098 [2024-04-18 17:07:37.794878] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.098 [2024-04-18 17:07:37.794893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.098 [2024-04-18 17:07:37.798459] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.357 [2024-04-18 17:07:37.807688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.357 [2024-04-18 17:07:37.808066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.808234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.808262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.357 [2024-04-18 17:07:37.808279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.357 [2024-04-18 17:07:37.808527] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.357 [2024-04-18 17:07:37.808769] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.357 [2024-04-18 17:07:37.808799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.357 [2024-04-18 17:07:37.808815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.357 [2024-04-18 17:07:37.812366] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.357 [2024-04-18 17:07:37.821601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.357 [2024-04-18 17:07:37.821983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.822136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.822164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.357 [2024-04-18 17:07:37.822181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.357 [2024-04-18 17:07:37.822428] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.357 [2024-04-18 17:07:37.822670] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.357 [2024-04-18 17:07:37.822695] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.357 [2024-04-18 17:07:37.822710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.357 [2024-04-18 17:07:37.826257] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.357 [2024-04-18 17:07:37.835481] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.357 [2024-04-18 17:07:37.835888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.836075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.836103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.357 [2024-04-18 17:07:37.836121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.357 [2024-04-18 17:07:37.836357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.357 [2024-04-18 17:07:37.836609] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.357 [2024-04-18 17:07:37.836634] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.357 [2024-04-18 17:07:37.836650] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.357 [2024-04-18 17:07:37.840199] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.357 [2024-04-18 17:07:37.849478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.357 [2024-04-18 17:07:37.849865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.850063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.850130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.357 [2024-04-18 17:07:37.850149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.357 [2024-04-18 17:07:37.850396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.357 [2024-04-18 17:07:37.850638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.357 [2024-04-18 17:07:37.850663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.357 [2024-04-18 17:07:37.850684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.357 [2024-04-18 17:07:37.854231] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.357 [2024-04-18 17:07:37.863463] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.357 [2024-04-18 17:07:37.863874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.864050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.864079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.357 [2024-04-18 17:07:37.864097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.357 [2024-04-18 17:07:37.864334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.357 [2024-04-18 17:07:37.864587] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.357 [2024-04-18 17:07:37.864612] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.357 [2024-04-18 17:07:37.864627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.357 [2024-04-18 17:07:37.868178] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.357 [2024-04-18 17:07:37.877399] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.357 [2024-04-18 17:07:37.877835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.878022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.878080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.357 [2024-04-18 17:07:37.878098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.357 [2024-04-18 17:07:37.878335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.357 [2024-04-18 17:07:37.878587] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.357 [2024-04-18 17:07:37.878612] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.357 [2024-04-18 17:07:37.878628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.357 [2024-04-18 17:07:37.882179] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.357 [2024-04-18 17:07:37.891396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.357 [2024-04-18 17:07:37.891779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.891942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.357 [2024-04-18 17:07:37.891971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:37.891989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:37.892226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:37.892478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:37.892503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:37.892518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:37.896074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:37.905324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:37.905750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.905901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.905930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:37.905948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:37.906185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:37.906439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:37.906464] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:37.906480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:37.910032] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:37.919254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:37.919678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.919832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.919861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:37.919878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:37.920115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:37.920357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:37.920391] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:37.920410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:37.923960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:37.933172] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:37.933559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.933832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.933891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:37.933909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:37.934147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:37.934400] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:37.934425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:37.934441] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:37.937992] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:37.947023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:37.947430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.947646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.947702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:37.947719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:37.947957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:37.948199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:37.948223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:37.948239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:37.951798] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:37.961014] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:37.961432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.961587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.961615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:37.961633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:37.961870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:37.962112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:37.962136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:37.962152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:37.965720] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:37.974943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:37.975360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.975513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.975542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:37.975560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:37.975797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:37.976040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:37.976063] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:37.976079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:37.979646] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:37.988872] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:37.989288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.989445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:37.989476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:37.989494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:37.989731] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:37.989973] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:37.989997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:37.990012] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:37.993572] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:38.002796] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:38.003210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:38.003373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:38.003410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:38.003429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:38.003665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:38.003908] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:38.003932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:38.003947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.358 [2024-04-18 17:07:38.007505] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.358 [2024-04-18 17:07:38.016724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.358 [2024-04-18 17:07:38.017140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:38.017293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.358 [2024-04-18 17:07:38.017321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.358 [2024-04-18 17:07:38.017339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.358 [2024-04-18 17:07:38.017586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.358 [2024-04-18 17:07:38.017830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.358 [2024-04-18 17:07:38.017854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.358 [2024-04-18 17:07:38.017869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.359 [2024-04-18 17:07:38.021426] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.359 [2024-04-18 17:07:38.030640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.359 [2024-04-18 17:07:38.031046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.359 [2024-04-18 17:07:38.031173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.359 [2024-04-18 17:07:38.031207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.359 [2024-04-18 17:07:38.031226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.359 [2024-04-18 17:07:38.031474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.359 [2024-04-18 17:07:38.031718] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.359 [2024-04-18 17:07:38.031742] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.359 [2024-04-18 17:07:38.031757] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.359 [2024-04-18 17:07:38.035375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.359 [2024-04-18 17:07:38.044602] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.359 [2024-04-18 17:07:38.045080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.359 [2024-04-18 17:07:38.045257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.359 [2024-04-18 17:07:38.045285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.359 [2024-04-18 17:07:38.045303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.359 [2024-04-18 17:07:38.045551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.359 [2024-04-18 17:07:38.045796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.359 [2024-04-18 17:07:38.045820] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.359 [2024-04-18 17:07:38.045835] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.359 [2024-04-18 17:07:38.049432] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.359 [2024-04-18 17:07:38.058446] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.359 [2024-04-18 17:07:38.058854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.359 [2024-04-18 17:07:38.059025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.359 [2024-04-18 17:07:38.059054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.359 [2024-04-18 17:07:38.059071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.359 [2024-04-18 17:07:38.059308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.359 [2024-04-18 17:07:38.059562] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.359 [2024-04-18 17:07:38.059588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.359 [2024-04-18 17:07:38.059604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.620 [2024-04-18 17:07:38.063169] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.620 [2024-04-18 17:07:38.072420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.620 [2024-04-18 17:07:38.072840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.620 [2024-04-18 17:07:38.073035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.620 [2024-04-18 17:07:38.073076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.620 [2024-04-18 17:07:38.073100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.620 [2024-04-18 17:07:38.073338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.620 [2024-04-18 17:07:38.073589] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.620 [2024-04-18 17:07:38.073614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.620 [2024-04-18 17:07:38.073630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.620 [2024-04-18 17:07:38.077181] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.620 [2024-04-18 17:07:38.086423] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.620 [2024-04-18 17:07:38.086896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.620 [2024-04-18 17:07:38.087048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.620 [2024-04-18 17:07:38.087076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.620 [2024-04-18 17:07:38.087093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.620 [2024-04-18 17:07:38.087330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.620 [2024-04-18 17:07:38.087580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.620 [2024-04-18 17:07:38.087606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.620 [2024-04-18 17:07:38.087621] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.620 [2024-04-18 17:07:38.091196] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.620 [2024-04-18 17:07:38.100249] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.620 [2024-04-18 17:07:38.100651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.620 [2024-04-18 17:07:38.100806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.620 [2024-04-18 17:07:38.100833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.620 [2024-04-18 17:07:38.100851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.620 [2024-04-18 17:07:38.101088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.620 [2024-04-18 17:07:38.101330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.620 [2024-04-18 17:07:38.101353] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.620 [2024-04-18 17:07:38.101374] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.620 [2024-04-18 17:07:38.104934] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.620 [2024-04-18 17:07:38.114161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.620 [2024-04-18 17:07:38.114528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.620 [2024-04-18 17:07:38.114650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.620 [2024-04-18 17:07:38.114689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.620 [2024-04-18 17:07:38.114706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.620 [2024-04-18 17:07:38.114948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.115190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.115214] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.115230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.118798] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.128032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.128443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.128578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.128608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.128626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.621 [2024-04-18 17:07:38.128864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.129107] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.129130] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.129146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.132704] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.141931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.142314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.142458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.142487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.142505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.621 [2024-04-18 17:07:38.142742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.142983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.143007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.143023] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.146584] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.155808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.156218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.156374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.156410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.156429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.621 [2024-04-18 17:07:38.156666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.156913] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.156938] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.156953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.160512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.169743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.170126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.170281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.170309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.170327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.621 [2024-04-18 17:07:38.170574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.170816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.170840] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.170855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.174415] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.183645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.184027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.184211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.184240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.184258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.621 [2024-04-18 17:07:38.184505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.184761] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.184785] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.184800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.188351] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.197600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.198008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.198164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.198192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.198210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.621 [2024-04-18 17:07:38.198457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.198700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.198729] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.198745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.202293] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.211530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.211934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.212093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.212123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.212141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.621 [2024-04-18 17:07:38.212378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.212631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.212655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.212671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.216221] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.225448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.225862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.226048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.226077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.226095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.621 [2024-04-18 17:07:38.226332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.621 [2024-04-18 17:07:38.226585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-04-18 17:07:38.226610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-04-18 17:07:38.226626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-04-18 17:07:38.230175] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-04-18 17:07:38.239411] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-04-18 17:07:38.239793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.239994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-04-18 17:07:38.240040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-04-18 17:07:38.240058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.622 [2024-04-18 17:07:38.240296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.622 [2024-04-18 17:07:38.240549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.622 [2024-04-18 17:07:38.240573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.622 [2024-04-18 17:07:38.240594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.622 [2024-04-18 17:07:38.244143] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.622 [2024-04-18 17:07:38.253366] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.622 [2024-04-18 17:07:38.253806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.254000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.254047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.622 [2024-04-18 17:07:38.254066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.622 [2024-04-18 17:07:38.254304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.622 [2024-04-18 17:07:38.254557] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.622 [2024-04-18 17:07:38.254582] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.622 [2024-04-18 17:07:38.254597] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.622 [2024-04-18 17:07:38.258193] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.622 [2024-04-18 17:07:38.267221] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.622 [2024-04-18 17:07:38.267650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.267837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.267883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.622 [2024-04-18 17:07:38.267902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.622 [2024-04-18 17:07:38.268140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.622 [2024-04-18 17:07:38.268396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.622 [2024-04-18 17:07:38.268431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.622 [2024-04-18 17:07:38.268446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.622 [2024-04-18 17:07:38.272009] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.622 [2024-04-18 17:07:38.281062] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.622 [2024-04-18 17:07:38.281490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.281701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.281747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.622 [2024-04-18 17:07:38.281773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.622 [2024-04-18 17:07:38.282010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.622 [2024-04-18 17:07:38.282254] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.622 [2024-04-18 17:07:38.282279] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.622 [2024-04-18 17:07:38.282295] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.622 [2024-04-18 17:07:38.285870] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.622 [2024-04-18 17:07:38.294920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.622 [2024-04-18 17:07:38.295342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.295537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.295567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.622 [2024-04-18 17:07:38.295585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.622 [2024-04-18 17:07:38.295821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.622 [2024-04-18 17:07:38.296065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.622 [2024-04-18 17:07:38.296090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.622 [2024-04-18 17:07:38.296106] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.622 [2024-04-18 17:07:38.299680] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.622 [2024-04-18 17:07:38.308915] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.622 [2024-04-18 17:07:38.309311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.309504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.309536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.622 [2024-04-18 17:07:38.309555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.622 [2024-04-18 17:07:38.309794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.622 [2024-04-18 17:07:38.310037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.622 [2024-04-18 17:07:38.310063] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.622 [2024-04-18 17:07:38.310078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.622 [2024-04-18 17:07:38.313642] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.622 [2024-04-18 17:07:38.322879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.622 [2024-04-18 17:07:38.323284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.323438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.622 [2024-04-18 17:07:38.323468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.622 [2024-04-18 17:07:38.323486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.622 [2024-04-18 17:07:38.323723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.622 [2024-04-18 17:07:38.323966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.622 [2024-04-18 17:07:38.323991] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.622 [2024-04-18 17:07:38.324007] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.881 [2024-04-18 17:07:38.327572] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.881 [2024-04-18 17:07:38.336818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.881 [2024-04-18 17:07:38.337240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.881 [2024-04-18 17:07:38.337400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.882 [2024-04-18 17:07:38.337429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.882 [2024-04-18 17:07:38.337447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.882 [2024-04-18 17:07:38.337683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.882 [2024-04-18 17:07:38.337924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.882 [2024-04-18 17:07:38.337949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.882 [2024-04-18 17:07:38.337965] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.882 [2024-04-18 17:07:38.341530] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.882 [2024-04-18 17:07:38.350765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.882 [2024-04-18 17:07:38.351229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.882 [2024-04-18 17:07:38.351446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.882 [2024-04-18 17:07:38.351480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.882 [2024-04-18 17:07:38.351515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.882 [2024-04-18 17:07:38.351753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.882 [2024-04-18 17:07:38.351996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.882 [2024-04-18 17:07:38.352020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.882 [2024-04-18 17:07:38.352036] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.882 [2024-04-18 17:07:38.355598] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.882 [2024-04-18 17:07:38.364619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.882 [2024-04-18 17:07:38.365037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.365187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.365217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.882 [2024-04-18 17:07:38.365235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.882 [2024-04-18 17:07:38.365486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.882 [2024-04-18 17:07:38.365730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.882 [2024-04-18 17:07:38.365755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.882 [2024-04-18 17:07:38.365771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.882 [2024-04-18 17:07:38.369323] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.882 [2024-04-18 17:07:38.378565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.882 [2024-04-18 17:07:38.378981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.379163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.379192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.882 [2024-04-18 17:07:38.379210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.882 [2024-04-18 17:07:38.379462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.882 [2024-04-18 17:07:38.379705] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.882 [2024-04-18 17:07:38.379730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.882 [2024-04-18 17:07:38.379747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.882 [2024-04-18 17:07:38.383299] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.882 [2024-04-18 17:07:38.392534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.882 [2024-04-18 17:07:38.392959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.393112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.393140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.882 [2024-04-18 17:07:38.393157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.882 [2024-04-18 17:07:38.393408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.882 [2024-04-18 17:07:38.393649] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.882 [2024-04-18 17:07:38.393675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.882 [2024-04-18 17:07:38.393691] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.882 [2024-04-18 17:07:38.397248] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.882 [2024-04-18 17:07:38.406482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.882 [2024-04-18 17:07:38.406890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.407075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.407103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.882 [2024-04-18 17:07:38.407121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.882 [2024-04-18 17:07:38.407358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.882 [2024-04-18 17:07:38.407613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.882 [2024-04-18 17:07:38.407639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.882 [2024-04-18 17:07:38.407654] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.882 [2024-04-18 17:07:38.411206] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.882 [2024-04-18 17:07:38.420443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.882 [2024-04-18 17:07:38.420847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.420993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.421021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.882 [2024-04-18 17:07:38.421043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.882 [2024-04-18 17:07:38.421281] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.882 [2024-04-18 17:07:38.421538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.882 [2024-04-18 17:07:38.421564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.882 [2024-04-18 17:07:38.421580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.882 [2024-04-18 17:07:38.425134] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.882 [2024-04-18 17:07:38.434355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.882 [2024-04-18 17:07:38.434858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.435037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.882 [2024-04-18 17:07:38.435087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:22.882 [2024-04-18 17:07:38.435105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:22.882 [2024-04-18 17:07:38.435342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:22.882 [2024-04-18 17:07:38.435598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.882 [2024-04-18 17:07:38.435623] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.882 [2024-04-18 17:07:38.435640] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.882 [2024-04-18 17:07:38.439190] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.882 [2024-04-18 17:07:38.448211] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.882 [2024-04-18 17:07:38.448627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.882 [2024-04-18 17:07:38.448802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.882 [2024-04-18 17:07:38.448848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.882 [2024-04-18 17:07:38.448866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.882 [2024-04-18 17:07:38.449104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.882 [2024-04-18 17:07:38.449347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.882 [2024-04-18 17:07:38.449372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.882 [2024-04-18 17:07:38.449402] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.882 [2024-04-18 17:07:38.452957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.882 [2024-04-18 17:07:38.462181] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.882 [2024-04-18 17:07:38.462625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.882 [2024-04-18 17:07:38.462821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.882 [2024-04-18 17:07:38.462849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.882 [2024-04-18 17:07:38.462866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.882 [2024-04-18 17:07:38.463109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.882 [2024-04-18 17:07:38.463352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.463377] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.463407] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 [2024-04-18 17:07:38.467003] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.883 [2024-04-18 17:07:38.476027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.883 [2024-04-18 17:07:38.476435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.476599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.476629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.883 [2024-04-18 17:07:38.476647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.883 [2024-04-18 17:07:38.476886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.883 [2024-04-18 17:07:38.477129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.477154] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.477170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 [2024-04-18 17:07:38.480733] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.883 [2024-04-18 17:07:38.489956] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.883 [2024-04-18 17:07:38.490371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.490536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.490566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.883 [2024-04-18 17:07:38.490584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.883 [2024-04-18 17:07:38.490822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.883 [2024-04-18 17:07:38.491065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.491090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.491106] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 [2024-04-18 17:07:38.494669] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.883 [2024-04-18 17:07:38.503904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.883 [2024-04-18 17:07:38.504345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.504510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.504541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.883 [2024-04-18 17:07:38.504559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.883 [2024-04-18 17:07:38.504797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.883 [2024-04-18 17:07:38.505046] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.505071] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.505087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 [2024-04-18 17:07:38.508650] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.883 [2024-04-18 17:07:38.517883] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.883 [2024-04-18 17:07:38.518266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.518418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.518447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.883 [2024-04-18 17:07:38.518465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.883 [2024-04-18 17:07:38.518701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.883 [2024-04-18 17:07:38.518942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.518966] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.518982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 [2024-04-18 17:07:38.522543] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.883 [2024-04-18 17:07:38.531775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.883 [2024-04-18 17:07:38.532184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.532368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.532408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.883 [2024-04-18 17:07:38.532427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.883 [2024-04-18 17:07:38.532665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.883 [2024-04-18 17:07:38.532906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.532931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.532947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 [2024-04-18 17:07:38.536508] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.883 [2024-04-18 17:07:38.545741] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.883 [2024-04-18 17:07:38.546149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.546304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.546333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.883 [2024-04-18 17:07:38.546350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.883 [2024-04-18 17:07:38.546600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.883 [2024-04-18 17:07:38.546842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.546873] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.546889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 [2024-04-18 17:07:38.550449] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1761233 Killed "${NVMF_APP[@]}" "$@"
00:20:22.883 17:07:38 -- host/bdevperf.sh@36 -- # tgt_init
00:20:22.883 17:07:38 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:20:22.883 17:07:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:22.883 17:07:38 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:22.883 17:07:38 -- common/autotest_common.sh@10 -- # set +x
00:20:22.883 [2024-04-18 17:07:38.559681] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.883 17:07:38 -- nvmf/common.sh@470 -- # nvmfpid=1762187
00:20:22.883 17:07:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:22.883 17:07:38 -- nvmf/common.sh@471 -- # waitforlisten 1762187
00:20:22.883 [2024-04-18 17:07:38.560066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 17:07:38 -- common/autotest_common.sh@817 -- # '[' -z 1762187 ']'
00:20:22.883 [2024-04-18 17:07:38.560243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.560271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.883 [2024-04-18 17:07:38.560289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.883 17:07:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:22.883 17:07:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:22.883 [2024-04-18 17:07:38.560536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.883 17:07:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:22.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:22.883 17:07:38 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:22.883 [2024-04-18 17:07:38.560778] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.560803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.560819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 17:07:38 -- common/autotest_common.sh@10 -- # set +x
00:20:22.883 [2024-04-18 17:07:38.564379] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.883 [2024-04-18 17:07:38.573628] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.883 [2024-04-18 17:07:38.574038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.574205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.883 [2024-04-18 17:07:38.574233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:22.883 [2024-04-18 17:07:38.574250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:22.883 [2024-04-18 17:07:38.574498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:22.883 [2024-04-18 17:07:38.574740] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.883 [2024-04-18 17:07:38.574763] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.883 [2024-04-18 17:07:38.574784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.883 [2024-04-18 17:07:38.578338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.145 [2024-04-18 17:07:38.587589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.145 [2024-04-18 17:07:38.587999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.145 [2024-04-18 17:07:38.588115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.145 [2024-04-18 17:07:38.588144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.145 [2024-04-18 17:07:38.588162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.145 [2024-04-18 17:07:38.588411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.145 [2024-04-18 17:07:38.588653] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.145 [2024-04-18 17:07:38.588677] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.145 [2024-04-18 17:07:38.588693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.145 [2024-04-18 17:07:38.592250] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.145 [2024-04-18 17:07:38.601513] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.145 [2024-04-18 17:07:38.601909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.145 [2024-04-18 17:07:38.602060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.145 [2024-04-18 17:07:38.602088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.145 [2024-04-18 17:07:38.602106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.145 [2024-04-18 17:07:38.602343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.145 [2024-04-18 17:07:38.602596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.145 [2024-04-18 17:07:38.602621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.145 [2024-04-18 17:07:38.602637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.145 [2024-04-18 17:07:38.606194] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.145 [2024-04-18 17:07:38.607761] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:20:23.145 [2024-04-18 17:07:38.607830] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.145 [2024-04-18 17:07:38.615438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.145 [2024-04-18 17:07:38.615861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.145 [2024-04-18 17:07:38.616004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.145 [2024-04-18 17:07:38.616032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.145 [2024-04-18 17:07:38.616049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.145 [2024-04-18 17:07:38.616285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.145 [2024-04-18 17:07:38.616537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.145 [2024-04-18 17:07:38.616567] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.145 [2024-04-18 17:07:38.616584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.145 [2024-04-18 17:07:38.620137] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.145 [2024-04-18 17:07:38.629373] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.145 [2024-04-18 17:07:38.629777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.145 [2024-04-18 17:07:38.629912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.145 [2024-04-18 17:07:38.629940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.145 [2024-04-18 17:07:38.629957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.145 [2024-04-18 17:07:38.630193] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.145 [2024-04-18 17:07:38.630448] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.145 [2024-04-18 17:07:38.630472] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.145 [2024-04-18 17:07:38.630488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.145 [2024-04-18 17:07:38.634042] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.145 [2024-04-18 17:07:38.643282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.145 [2024-04-18 17:07:38.643651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.145 [2024-04-18 17:07:38.643828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.145 [2024-04-18 17:07:38.643856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.145 [2024-04-18 17:07:38.643873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.145 [2024-04-18 17:07:38.644119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.145 [2024-04-18 17:07:38.644359] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.145 [2024-04-18 17:07:38.644393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.145 [2024-04-18 17:07:38.644411] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.145 [2024-04-18 17:07:38.647965] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.145 EAL: No free 2048 kB hugepages reported on node 1
00:20:23.145 [2024-04-18 17:07:38.657204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.145 [2024-04-18 17:07:38.657580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.145 [2024-04-18 17:07:38.657744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.145 [2024-04-18 17:07:38.657771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.145 [2024-04-18 17:07:38.657789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.145 [2024-04-18 17:07:38.658026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.145 [2024-04-18 17:07:38.658267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.145 [2024-04-18 17:07:38.658290] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.145 [2024-04-18 17:07:38.658310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.145 [2024-04-18 17:07:38.661880] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.145 [2024-04-18 17:07:38.671131] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.145 [2024-04-18 17:07:38.671553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.145 [2024-04-18 17:07:38.671739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.145 [2024-04-18 17:07:38.671766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.145 [2024-04-18 17:07:38.671784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.145 [2024-04-18 17:07:38.672021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.145 [2024-04-18 17:07:38.672262] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.145 [2024-04-18 17:07:38.672285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.145 [2024-04-18 17:07:38.672300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.145 [2024-04-18 17:07:38.675936] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.145 [2024-04-18 17:07:38.684392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:23.145 [2024-04-18 17:07:38.684968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.145 [2024-04-18 17:07:38.685396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.145 [2024-04-18 17:07:38.685554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.685582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.685599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.685837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.686078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.686101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.686116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.689696] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.698995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.146 [2024-04-18 17:07:38.699572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.699781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.699809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.699829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.700077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.700322] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.700346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.700394] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.703959] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.713006] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.146 [2024-04-18 17:07:38.713437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.713556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.713584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.713602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.713839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.714081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.714104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.714120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.717687] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.726925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.146 [2024-04-18 17:07:38.727348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.727514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.727542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.727560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.727797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.728039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.728063] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.728078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.731637] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.740865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.146 [2024-04-18 17:07:38.741250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.741415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.741445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.741462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.741699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.741941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.741964] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.741980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.745548] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.754796] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.146 [2024-04-18 17:07:38.755378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.755603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.755631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.755652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.755899] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.756144] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.756168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.756185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.759751] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.768770] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.146 [2024-04-18 17:07:38.769179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.769346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.769374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.769401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.769640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.769886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.769909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.769925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.773479] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.782707] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.146 [2024-04-18 17:07:38.783101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.783283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.783312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.783329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.783577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.783819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.783842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.783858] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.787415] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.796657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.146 [2024-04-18 17:07:38.797092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.797249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.146 [2024-04-18 17:07:38.797278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.146 [2024-04-18 17:07:38.797295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.146 [2024-04-18 17:07:38.797542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.146 [2024-04-18 17:07:38.797784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.146 [2024-04-18 17:07:38.797807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.146 [2024-04-18 17:07:38.797823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.146 [2024-04-18 17:07:38.801370] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.146 [2024-04-18 17:07:38.804017] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:23.146 [2024-04-18 17:07:38.804056] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:23.146 [2024-04-18 17:07:38.804072] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:23.146 [2024-04-18 17:07:38.804086] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:23.146 [2024-04-18 17:07:38.804099] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:23.146 [2024-04-18 17:07:38.804189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:23.146 [2024-04-18 17:07:38.804244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:23.146 [2024-04-18 17:07:38.804247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:23.146 [2024-04-18 17:07:38.810603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.147 [2024-04-18 17:07:38.811100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.147 [2024-04-18 17:07:38.811227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.147 [2024-04-18 17:07:38.811255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.147 [2024-04-18 17:07:38.811274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.147 [2024-04-18 17:07:38.811526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.147 [2024-04-18 17:07:38.811780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.147 [2024-04-18 17:07:38.811804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.147 [2024-04-18 17:07:38.811821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.147 [2024-04-18 17:07:38.815396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.147 [2024-04-18 17:07:38.824636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.147 [2024-04-18 17:07:38.825215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.147 [2024-04-18 17:07:38.825399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.147 [2024-04-18 17:07:38.825429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.147 [2024-04-18 17:07:38.825450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.147 [2024-04-18 17:07:38.825708] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.147 [2024-04-18 17:07:38.825954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.147 [2024-04-18 17:07:38.825977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.147 [2024-04-18 17:07:38.825994] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.147 [2024-04-18 17:07:38.829555] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.147 [2024-04-18 17:07:38.838596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.147 [2024-04-18 17:07:38.839153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.147 [2024-04-18 17:07:38.839305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.147 [2024-04-18 17:07:38.839334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.147 [2024-04-18 17:07:38.839354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.147 [2024-04-18 17:07:38.839611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.147 [2024-04-18 17:07:38.839857] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.147 [2024-04-18 17:07:38.839880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.147 [2024-04-18 17:07:38.839898] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.147 [2024-04-18 17:07:38.843460] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.426 [2024-04-18 17:07:38.852505] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.426 [2024-04-18 17:07:38.853034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.426 [2024-04-18 17:07:38.853207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.426 [2024-04-18 17:07:38.853235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.426 [2024-04-18 17:07:38.853256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.426 [2024-04-18 17:07:38.853514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.426 [2024-04-18 17:07:38.853762] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.426 [2024-04-18 17:07:38.853786] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.426 [2024-04-18 17:07:38.853804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.426 [2024-04-18 17:07:38.857359] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.426 [2024-04-18 17:07:38.866398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.426 [2024-04-18 17:07:38.866858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.426 [2024-04-18 17:07:38.866994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.426 [2024-04-18 17:07:38.867023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.426 [2024-04-18 17:07:38.867043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.426 [2024-04-18 17:07:38.867287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.426 [2024-04-18 17:07:38.867551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.426 [2024-04-18 17:07:38.867575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.426 [2024-04-18 17:07:38.867591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.426 [2024-04-18 17:07:38.871142] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.426 [2024-04-18 17:07:38.880496] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.426 [2024-04-18 17:07:38.881109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.426 [2024-04-18 17:07:38.881285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.426 [2024-04-18 17:07:38.881314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.426 [2024-04-18 17:07:38.881334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.426 [2024-04-18 17:07:38.881592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.426 [2024-04-18 17:07:38.881839] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.426 [2024-04-18 17:07:38.881863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.426 [2024-04-18 17:07:38.881881] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.426 [2024-04-18 17:07:38.885442] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.426 [2024-04-18 17:07:38.894487] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.426 [2024-04-18 17:07:38.895009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.426 [2024-04-18 17:07:38.895149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.426 [2024-04-18 17:07:38.895177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.426 [2024-04-18 17:07:38.895196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.426 [2024-04-18 17:07:38.895452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.426 [2024-04-18 17:07:38.895697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.426 [2024-04-18 17:07:38.895720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.426 [2024-04-18 17:07:38.895737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.426 [2024-04-18 17:07:38.899296] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.426 [2024-04-18 17:07:38.908311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.426 [2024-04-18 17:07:38.908706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.427 [2024-04-18 17:07:38.908863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.427 [2024-04-18 17:07:38.908891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.427 [2024-04-18 17:07:38.908909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.427 [2024-04-18 17:07:38.909147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.427 [2024-04-18 17:07:38.909399] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.427 [2024-04-18 17:07:38.909437] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.427 [2024-04-18 17:07:38.909452] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.427 [2024-04-18 17:07:38.913004] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.427 [2024-04-18 17:07:38.922230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.427 [2024-04-18 17:07:38.922621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.427 [2024-04-18 17:07:38.922781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.427 [2024-04-18 17:07:38.922809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.427 [2024-04-18 17:07:38.922826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.427 [2024-04-18 17:07:38.923063] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.427 [2024-04-18 17:07:38.923305] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.427 [2024-04-18 17:07:38.923328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.427 [2024-04-18 17:07:38.923342] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.427 [2024-04-18 17:07:38.926900] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.427 [2024-04-18 17:07:38.936154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.427 [2024-04-18 17:07:38.936581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.427 [2024-04-18 17:07:38.936764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.427 [2024-04-18 17:07:38.936792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.427 [2024-04-18 17:07:38.936810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.427 [2024-04-18 17:07:38.937047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.427 [2024-04-18 17:07:38.937288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.427 [2024-04-18 17:07:38.937311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.427 [2024-04-18 17:07:38.937326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.427 [2024-04-18 17:07:38.940886] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.427 [2024-04-18 17:07:38.950108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.427 [2024-04-18 17:07:38.950528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.427 [2024-04-18 17:07:38.950718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.427 [2024-04-18 17:07:38.950745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420
00:20:23.427 [2024-04-18 17:07:38.950763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set
00:20:23.427 [2024-04-18 17:07:38.950999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor
00:20:23.427 [2024-04-18 17:07:38.951240] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.427 [2024-04-18 17:07:38.951263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.427 [2024-04-18 17:07:38.951284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.427 [2024-04-18 17:07:38.954844] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.427 [2024-04-18 17:07:38.964068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.427 [2024-04-18 17:07:38.964464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:38.964604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:38.964632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.427 [2024-04-18 17:07:38.964649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.427 [2024-04-18 17:07:38.964885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.427 [2024-04-18 17:07:38.965126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.427 [2024-04-18 17:07:38.965149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.427 [2024-04-18 17:07:38.965164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.427 [2024-04-18 17:07:38.968721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.427 [2024-04-18 17:07:38.977945] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.427 [2024-04-18 17:07:38.978330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:38.978512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:38.978541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.427 [2024-04-18 17:07:38.978558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.427 [2024-04-18 17:07:38.978794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.427 [2024-04-18 17:07:38.979035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.427 [2024-04-18 17:07:38.979058] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.427 [2024-04-18 17:07:38.979073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.427 [2024-04-18 17:07:38.982629] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.427 [2024-04-18 17:07:38.991849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.427 [2024-04-18 17:07:38.992257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:38.992480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:38.992511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.427 [2024-04-18 17:07:38.992528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.427 [2024-04-18 17:07:38.992765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.427 [2024-04-18 17:07:38.993007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.427 [2024-04-18 17:07:38.993030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.427 [2024-04-18 17:07:38.993045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.427 [2024-04-18 17:07:38.996619] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.427 [2024-04-18 17:07:39.005839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.427 [2024-04-18 17:07:39.006226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:39.006390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:39.006420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.427 [2024-04-18 17:07:39.006437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.427 [2024-04-18 17:07:39.006675] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.427 [2024-04-18 17:07:39.006916] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.427 [2024-04-18 17:07:39.006940] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.427 [2024-04-18 17:07:39.006955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.427 [2024-04-18 17:07:39.010512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.427 [2024-04-18 17:07:39.019729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.427 [2024-04-18 17:07:39.020152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:39.020286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:39.020313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.427 [2024-04-18 17:07:39.020330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.427 [2024-04-18 17:07:39.020577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.427 [2024-04-18 17:07:39.020819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.427 [2024-04-18 17:07:39.020843] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.427 [2024-04-18 17:07:39.020857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.427 [2024-04-18 17:07:39.024415] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.427 [2024-04-18 17:07:39.033635] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.427 [2024-04-18 17:07:39.034048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:39.034181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.427 [2024-04-18 17:07:39.034209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.427 [2024-04-18 17:07:39.034226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.427 [2024-04-18 17:07:39.034471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.427 [2024-04-18 17:07:39.034712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.428 [2024-04-18 17:07:39.034736] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.428 [2024-04-18 17:07:39.034751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.428 [2024-04-18 17:07:39.038298] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.428 [2024-04-18 17:07:39.047524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.428 [2024-04-18 17:07:39.047938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.048082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.048109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.428 [2024-04-18 17:07:39.048126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.428 [2024-04-18 17:07:39.048363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.428 [2024-04-18 17:07:39.048614] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.428 [2024-04-18 17:07:39.048638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.428 [2024-04-18 17:07:39.048653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.428 [2024-04-18 17:07:39.052203] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.428 [2024-04-18 17:07:39.061467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.428 [2024-04-18 17:07:39.061853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.061999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.062025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.428 [2024-04-18 17:07:39.062042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.428 [2024-04-18 17:07:39.062278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.428 [2024-04-18 17:07:39.062527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.428 [2024-04-18 17:07:39.062551] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.428 [2024-04-18 17:07:39.062566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.428 [2024-04-18 17:07:39.066116] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.428 [2024-04-18 17:07:39.075338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.428 [2024-04-18 17:07:39.075730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.075863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.075890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.428 [2024-04-18 17:07:39.075907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.428 [2024-04-18 17:07:39.076143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.428 [2024-04-18 17:07:39.076392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.428 [2024-04-18 17:07:39.076415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.428 [2024-04-18 17:07:39.076430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.428 [2024-04-18 17:07:39.079979] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.428 [2024-04-18 17:07:39.089207] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.428 [2024-04-18 17:07:39.089604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.089749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.089778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.428 [2024-04-18 17:07:39.089795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.428 [2024-04-18 17:07:39.090032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.428 [2024-04-18 17:07:39.090273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.428 [2024-04-18 17:07:39.090297] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.428 [2024-04-18 17:07:39.090312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.428 [2024-04-18 17:07:39.093871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.428 [2024-04-18 17:07:39.103099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.428 [2024-04-18 17:07:39.103480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.103635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.103663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.428 [2024-04-18 17:07:39.103680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.428 [2024-04-18 17:07:39.103916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.428 [2024-04-18 17:07:39.104157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.428 [2024-04-18 17:07:39.104180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.428 [2024-04-18 17:07:39.104195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.428 [2024-04-18 17:07:39.107752] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.428 [2024-04-18 17:07:39.116988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.428 [2024-04-18 17:07:39.117377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.117516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.428 [2024-04-18 17:07:39.117544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.428 [2024-04-18 17:07:39.117561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.428 [2024-04-18 17:07:39.117798] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.428 [2024-04-18 17:07:39.118038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.428 [2024-04-18 17:07:39.118061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.428 [2024-04-18 17:07:39.118076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.428 [2024-04-18 17:07:39.121637] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.130867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.131214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.131368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.131409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.697 [2024-04-18 17:07:39.131428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.697 [2024-04-18 17:07:39.131666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.697 [2024-04-18 17:07:39.131906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.697 [2024-04-18 17:07:39.131929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.697 [2024-04-18 17:07:39.131944] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.697 [2024-04-18 17:07:39.135499] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.144729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.145112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.145289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.145317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.697 [2024-04-18 17:07:39.145333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.697 [2024-04-18 17:07:39.145582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.697 [2024-04-18 17:07:39.145823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.697 [2024-04-18 17:07:39.145846] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.697 [2024-04-18 17:07:39.145861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.697 [2024-04-18 17:07:39.149415] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.158648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.159026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.159170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.159198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.697 [2024-04-18 17:07:39.159215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.697 [2024-04-18 17:07:39.159461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.697 [2024-04-18 17:07:39.159712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.697 [2024-04-18 17:07:39.159735] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.697 [2024-04-18 17:07:39.159750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.697 [2024-04-18 17:07:39.163298] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.172534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.172920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.173042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.173069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.697 [2024-04-18 17:07:39.173091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.697 [2024-04-18 17:07:39.173329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.697 [2024-04-18 17:07:39.173579] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.697 [2024-04-18 17:07:39.173602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.697 [2024-04-18 17:07:39.173618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.697 [2024-04-18 17:07:39.177166] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.186406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.186822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.186971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.186998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.697 [2024-04-18 17:07:39.187015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.697 [2024-04-18 17:07:39.187251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.697 [2024-04-18 17:07:39.187502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.697 [2024-04-18 17:07:39.187527] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.697 [2024-04-18 17:07:39.187542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.697 [2024-04-18 17:07:39.191093] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.200340] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.200732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.200850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.200877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.697 [2024-04-18 17:07:39.200894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.697 [2024-04-18 17:07:39.201131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.697 [2024-04-18 17:07:39.201372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.697 [2024-04-18 17:07:39.201405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.697 [2024-04-18 17:07:39.201421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.697 [2024-04-18 17:07:39.204968] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.214200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.214601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.214754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.214782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.697 [2024-04-18 17:07:39.214799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.697 [2024-04-18 17:07:39.215040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.697 [2024-04-18 17:07:39.215282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.697 [2024-04-18 17:07:39.215305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.697 [2024-04-18 17:07:39.215320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.697 [2024-04-18 17:07:39.218899] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.228128] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.228486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.228635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.228662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.697 [2024-04-18 17:07:39.228679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.697 [2024-04-18 17:07:39.228915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.697 [2024-04-18 17:07:39.229155] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.697 [2024-04-18 17:07:39.229178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.697 [2024-04-18 17:07:39.229193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.697 [2024-04-18 17:07:39.232752] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.697 [2024-04-18 17:07:39.241983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.697 [2024-04-18 17:07:39.242338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.697 [2024-04-18 17:07:39.242497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.242525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.242542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.242778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.243019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.243042] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.243057] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.246616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.255838] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.256223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.256369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.256406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.256424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.256660] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.256907] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.256930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.256945] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.260503] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.269735] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.270116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.270263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.270292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.270309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.270557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.270798] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.270822] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.270837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.274401] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.283629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.284003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.284178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.284206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.284223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.284469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.284711] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.284734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.284749] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.288299] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.297586] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.297971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.298130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.298159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.298176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.298426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.298668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.298691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.298713] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.302265] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.311514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.311908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.312060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.312088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.312105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.312342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.312595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.312620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.312635] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.316210] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.325475] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.325870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.326018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.326046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.326063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.326299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.326552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.326576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.326592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.330140] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.339371] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.339766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.339893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.339921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.339938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.340174] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.340426] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.340450] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.340465] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.344025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.353262] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.353654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.353818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.353846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.353863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.354098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.354339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.698 [2024-04-18 17:07:39.354363] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.698 [2024-04-18 17:07:39.354377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.698 [2024-04-18 17:07:39.357940] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.698 [2024-04-18 17:07:39.367171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.698 [2024-04-18 17:07:39.367550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.367705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.698 [2024-04-18 17:07:39.367733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.698 [2024-04-18 17:07:39.367751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.698 [2024-04-18 17:07:39.367987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.698 [2024-04-18 17:07:39.368228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.699 [2024-04-18 17:07:39.368251] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.699 [2024-04-18 17:07:39.368266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.699 [2024-04-18 17:07:39.371828] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.699 [2024-04-18 17:07:39.381061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.699 [2024-04-18 17:07:39.381416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.699 [2024-04-18 17:07:39.381562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.699 [2024-04-18 17:07:39.381590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.699 [2024-04-18 17:07:39.381607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.699 [2024-04-18 17:07:39.381844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.699 [2024-04-18 17:07:39.382085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.699 [2024-04-18 17:07:39.382107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.699 [2024-04-18 17:07:39.382123] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.699 [2024-04-18 17:07:39.385680] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.699 [2024-04-18 17:07:39.394910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.699 [2024-04-18 17:07:39.395297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.699 [2024-04-18 17:07:39.395424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.699 [2024-04-18 17:07:39.395454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.699 [2024-04-18 17:07:39.395472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.699 [2024-04-18 17:07:39.395709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.699 [2024-04-18 17:07:39.395951] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.699 [2024-04-18 17:07:39.395974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.699 [2024-04-18 17:07:39.395989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.699 [2024-04-18 17:07:39.399559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.957 [2024-04-18 17:07:39.408796] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.409181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.409327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.409355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.409391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.409631] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.409873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.409896] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.409911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.413474] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.422703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.423084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.423201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.423229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.423247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.423495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.423738] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.423761] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.423777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.427327] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.436559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.436945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.437121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.437149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.437166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.437412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.437654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.437676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.437692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.441242] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.450477] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.450855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.451000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.451028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.451045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.451282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.451536] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.451559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.451575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.455125] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.464357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.464768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.464920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.464948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.464964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.465200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.465452] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.465477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.465492] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.469043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.478268] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.478648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.478764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.478797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.478815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.479051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.479292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.479316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.479331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.482891] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.492119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.492489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.492668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.492696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.492713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.492949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.493189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.493212] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.493228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.496794] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.506074] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.506480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.506610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.506641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.506659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.506896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.507138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.507161] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.507176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.510737] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.519968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.520355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.520518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.520546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.520569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.958 [2024-04-18 17:07:39.520806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.958 [2024-04-18 17:07:39.521048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.958 [2024-04-18 17:07:39.521071] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.958 [2024-04-18 17:07:39.521085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.958 [2024-04-18 17:07:39.524343] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.958 [2024-04-18 17:07:39.533511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.958 [2024-04-18 17:07:39.534033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.534164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.958 [2024-04-18 17:07:39.534189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.958 [2024-04-18 17:07:39.534204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.959 [2024-04-18 17:07:39.534427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.959 17:07:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.959 [2024-04-18 17:07:39.534644] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.959 17:07:39 -- common/autotest_common.sh@850 -- # return 0 00:20:23.959 [2024-04-18 17:07:39.534665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.959 [2024-04-18 17:07:39.534679] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.959 17:07:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:23.959 17:07:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:23.959 17:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.959 [2024-04-18 17:07:39.537905] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.959 [2024-04-18 17:07:39.547010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.959 [2024-04-18 17:07:39.547851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.548069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.548096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.959 [2024-04-18 17:07:39.548112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.959 [2024-04-18 17:07:39.548327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.959 [2024-04-18 17:07:39.548584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.959 [2024-04-18 17:07:39.548607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.959 [2024-04-18 17:07:39.548620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.959 [2024-04-18 17:07:39.551879] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.959 17:07:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.959 17:07:39 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.959 17:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.959 17:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.959 [2024-04-18 17:07:39.559015] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.959 [2024-04-18 17:07:39.560510] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.959 [2024-04-18 17:07:39.560881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.561052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.561077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.959 [2024-04-18 17:07:39.561093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.959 [2024-04-18 17:07:39.561306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.959 [2024-04-18 17:07:39.561563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.959 [2024-04-18 17:07:39.561585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.959 [2024-04-18 17:07:39.561599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:23.959 17:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.959 17:07:39 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:23.959 [2024-04-18 17:07:39.564871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.959 17:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.959 17:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.959 [2024-04-18 17:07:39.573925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.959 [2024-04-18 17:07:39.574281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.574407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.574434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.959 [2024-04-18 17:07:39.574449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.959 [2024-04-18 17:07:39.574662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.959 [2024-04-18 17:07:39.574883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.959 [2024-04-18 17:07:39.574902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.959 [2024-04-18 17:07:39.574915] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.959 [2024-04-18 17:07:39.578064] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.959 [2024-04-18 17:07:39.587541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.959 [2024-04-18 17:07:39.588094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.588218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.588243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.959 [2024-04-18 17:07:39.588261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.959 [2024-04-18 17:07:39.588492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.959 [2024-04-18 17:07:39.588728] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.959 [2024-04-18 17:07:39.588749] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.959 [2024-04-18 17:07:39.588773] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.959 [2024-04-18 17:07:39.591959] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.959 Malloc0 00:20:23.959 17:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.959 17:07:39 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.959 17:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.959 17:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.959 [2024-04-18 17:07:39.601079] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.959 [2024-04-18 17:07:39.601484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.601641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.601667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.959 [2024-04-18 17:07:39.601684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.959 [2024-04-18 17:07:39.601916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.959 [2024-04-18 17:07:39.602129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.959 [2024-04-18 17:07:39.602150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.959 [2024-04-18 17:07:39.602164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.959 17:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.959 [2024-04-18 17:07:39.605447] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.959 17:07:39 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:23.959 17:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.959 17:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.959 17:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.959 17:07:39 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.959 17:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.959 17:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.959 [2024-04-18 17:07:39.614665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.959 [2024-04-18 17:07:39.615051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.615194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.959 [2024-04-18 17:07:39.615218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7160 with addr=10.0.0.2, port=4420 00:20:23.959 [2024-04-18 17:07:39.615234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7160 is same with the state(5) to be set 00:20:23.959 [2024-04-18 17:07:39.615457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7160 (9): Bad file descriptor 00:20:23.959 [2024-04-18 17:07:39.615674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.959 [2024-04-18 17:07:39.615709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.959 [2024-04-18 17:07:39.615724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:23.959 [2024-04-18 17:07:39.617236] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.959 [2024-04-18 17:07:39.618935] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.959 17:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.959 17:07:39 -- host/bdevperf.sh@38 -- # wait 1761520 00:20:23.959 [2024-04-18 17:07:39.628194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:24.217 [2024-04-18 17:07:39.664925] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:32.341 00:20:32.341 Latency(us) 00:20:32.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.341 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:32.341 Verification LBA range: start 0x0 length 0x4000 00:20:32.341 Nvme1n1 : 15.01 6254.12 24.43 10310.57 0.00 7702.54 849.54 25631.86 00:20:32.341 =================================================================================================================== 00:20:32.341 Total : 6254.12 24.43 10310.57 0.00 7702.54 849.54 25631.86 00:20:32.599 17:07:48 -- host/bdevperf.sh@39 -- # sync 00:20:32.599 17:07:48 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.599 17:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.599 17:07:48 -- common/autotest_common.sh@10 -- # set +x 00:20:32.599 17:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.599 17:07:48 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:20:32.599 17:07:48 -- host/bdevperf.sh@44 -- # nvmftestfini 00:20:32.599 17:07:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:32.599 17:07:48 -- nvmf/common.sh@117 -- # sync 00:20:32.599 17:07:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.599 17:07:48 -- nvmf/common.sh@120 -- # set +e 
00:20:32.599 17:07:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.599 17:07:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.599 rmmod nvme_tcp 00:20:32.858 rmmod nvme_fabrics 00:20:32.858 rmmod nvme_keyring 00:20:32.858 17:07:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.858 17:07:48 -- nvmf/common.sh@124 -- # set -e 00:20:32.858 17:07:48 -- nvmf/common.sh@125 -- # return 0 00:20:32.858 17:07:48 -- nvmf/common.sh@478 -- # '[' -n 1762187 ']' 00:20:32.858 17:07:48 -- nvmf/common.sh@479 -- # killprocess 1762187 00:20:32.858 17:07:48 -- common/autotest_common.sh@936 -- # '[' -z 1762187 ']' 00:20:32.858 17:07:48 -- common/autotest_common.sh@940 -- # kill -0 1762187 00:20:32.858 17:07:48 -- common/autotest_common.sh@941 -- # uname 00:20:32.858 17:07:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.858 17:07:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1762187 00:20:32.858 17:07:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:32.858 17:07:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:32.858 17:07:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1762187' 00:20:32.858 killing process with pid 1762187 00:20:32.858 17:07:48 -- common/autotest_common.sh@955 -- # kill 1762187 00:20:32.858 17:07:48 -- common/autotest_common.sh@960 -- # wait 1762187 00:20:33.117 17:07:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:33.117 17:07:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:33.117 17:07:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:33.117 17:07:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.117 17:07:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.117 17:07:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.117 17:07:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.117 17:07:48 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.024 17:07:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.024 00:20:35.024 real 0m22.396s 00:20:35.024 user 1m0.410s 00:20:35.024 sys 0m4.078s 00:20:35.024 17:07:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:35.024 17:07:50 -- common/autotest_common.sh@10 -- # set +x 00:20:35.024 ************************************ 00:20:35.024 END TEST nvmf_bdevperf 00:20:35.024 ************************************ 00:20:35.024 17:07:50 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:20:35.024 17:07:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:35.024 17:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:35.024 17:07:50 -- common/autotest_common.sh@10 -- # set +x 00:20:35.283 ************************************ 00:20:35.283 START TEST nvmf_target_disconnect 00:20:35.283 ************************************ 00:20:35.283 17:07:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:20:35.283 * Looking for test storage... 
00:20:35.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:35.283 17:07:50 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.283 17:07:50 -- nvmf/common.sh@7 -- # uname -s 00:20:35.283 17:07:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.283 17:07:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.283 17:07:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.283 17:07:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.283 17:07:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.283 17:07:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.283 17:07:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.283 17:07:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.283 17:07:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.283 17:07:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.283 17:07:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.283 17:07:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.283 17:07:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.283 17:07:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.283 17:07:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.283 17:07:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.283 17:07:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.283 17:07:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.283 17:07:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.283 17:07:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.284 17:07:50 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.284 17:07:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.284 17:07:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.284 17:07:50 -- paths/export.sh@5 -- # export PATH 00:20:35.284 17:07:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.284 17:07:50 -- nvmf/common.sh@47 -- # : 0 00:20:35.284 17:07:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.284 17:07:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.284 17:07:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.284 17:07:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.284 17:07:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.284 17:07:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.284 17:07:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.284 17:07:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.284 17:07:50 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:35.284 17:07:50 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:20:35.284 17:07:50 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:20:35.284 17:07:50 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:20:35.284 17:07:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:35.284 17:07:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.284 17:07:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:35.284 17:07:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:35.284 17:07:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:35.284 17:07:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.284 17:07:50 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.284 17:07:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.284 17:07:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:35.284 17:07:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:35.284 17:07:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:35.284 17:07:50 -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 17:07:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:37.817 17:07:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.817 17:07:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.817 17:07:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.817 17:07:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.817 17:07:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.817 17:07:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.817 17:07:52 -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.817 17:07:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.817 17:07:52 -- nvmf/common.sh@296 -- # e810=() 00:20:37.817 17:07:52 -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.817 17:07:52 -- nvmf/common.sh@297 -- # x722=() 00:20:37.817 17:07:52 -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.817 17:07:52 -- nvmf/common.sh@298 -- # mlx=() 00:20:37.817 17:07:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.817 17:07:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.817 17:07:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.817 17:07:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.817 17:07:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.817 17:07:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.817 17:07:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:37.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:37.817 17:07:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.817 17:07:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:37.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:37.817 17:07:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:20:37.817 17:07:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.817 17:07:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.817 17:07:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.817 17:07:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:37.817 17:07:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.817 17:07:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:37.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:37.817 17:07:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.817 17:07:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.817 17:07:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.817 17:07:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:37.817 17:07:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.817 17:07:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:37.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:37.817 17:07:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.817 17:07:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:37.817 17:07:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:37.817 17:07:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:37.817 17:07:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:37.817 17:07:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.817 17:07:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.817 17:07:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.817 17:07:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.817 17:07:52 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.817 17:07:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.817 17:07:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.817 17:07:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.817 17:07:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.817 17:07:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.817 17:07:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.817 17:07:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.817 17:07:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.817 17:07:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.817 17:07:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.817 17:07:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.817 17:07:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.817 17:07:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.817 17:07:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.817 17:07:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:20:37.817 00:20:37.817 --- 10.0.0.2 ping statistics --- 00:20:37.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.817 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:20:37.817 17:07:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:20:37.817 00:20:37.817 --- 10.0.0.1 ping statistics --- 00:20:37.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.817 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:37.817 17:07:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.817 17:07:53 -- nvmf/common.sh@411 -- # return 0 00:20:37.817 17:07:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:37.817 17:07:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.817 17:07:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:37.817 17:07:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:37.817 17:07:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.817 17:07:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:37.817 17:07:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:37.817 17:07:53 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:20:37.817 17:07:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:37.817 17:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.817 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 ************************************ 00:20:37.817 START TEST nvmf_target_disconnect_tc1 00:20:37.817 ************************************ 00:20:37.817 17:07:53 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:20:37.817 17:07:53 -- host/target_disconnect.sh@32 -- # set +e 00:20:37.817 17:07:53 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.817 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.817 [2024-04-18 17:07:53.237287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.817 
[2024-04-18 17:07:53.237506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.817 [2024-04-18 17:07:53.237538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a3ad0 with addr=10.0.0.2, port=4420 00:20:37.817 [2024-04-18 17:07:53.237573] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:37.817 [2024-04-18 17:07:53.237601] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:37.817 [2024-04-18 17:07:53.237616] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:20:37.817 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:20:37.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:20:37.817 Initializing NVMe Controllers 00:20:37.818 17:07:53 -- host/target_disconnect.sh@33 -- # trap - ERR 00:20:37.818 17:07:53 -- host/target_disconnect.sh@33 -- # print_backtrace 00:20:37.818 17:07:53 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:20:37.818 17:07:53 -- common/autotest_common.sh@1139 -- # return 0 00:20:37.818 17:07:53 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:20:37.818 17:07:53 -- host/target_disconnect.sh@41 -- # set -e 00:20:37.818 00:20:37.818 real 0m0.094s 00:20:37.818 user 0m0.038s 00:20:37.818 sys 0m0.055s 00:20:37.818 17:07:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:37.818 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:20:37.818 ************************************ 00:20:37.818 END TEST nvmf_target_disconnect_tc1 00:20:37.818 ************************************ 00:20:37.818 17:07:53 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:20:37.818 17:07:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:37.818 17:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.818 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:20:37.818 
************************************ 00:20:37.818 START TEST nvmf_target_disconnect_tc2 00:20:37.818 ************************************ 00:20:37.818 17:07:53 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:20:37.818 17:07:53 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:20:37.818 17:07:53 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:37.818 17:07:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:37.818 17:07:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:37.818 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:20:37.818 17:07:53 -- nvmf/common.sh@470 -- # nvmfpid=1765359 00:20:37.818 17:07:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:37.818 17:07:53 -- nvmf/common.sh@471 -- # waitforlisten 1765359 00:20:37.818 17:07:53 -- common/autotest_common.sh@817 -- # '[' -z 1765359 ']' 00:20:37.818 17:07:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.818 17:07:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:37.818 17:07:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.818 17:07:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:37.818 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:20:37.818 [2024-04-18 17:07:53.422572] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:20:37.818 [2024-04-18 17:07:53.422668] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.818 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.818 [2024-04-18 17:07:53.491525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.075 [2024-04-18 17:07:53.604601] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.075 [2024-04-18 17:07:53.604665] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.075 [2024-04-18 17:07:53.604690] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.075 [2024-04-18 17:07:53.604701] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.075 [2024-04-18 17:07:53.604711] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.075 [2024-04-18 17:07:53.604795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:20:38.075 [2024-04-18 17:07:53.604859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:20:38.075 [2024-04-18 17:07:53.604927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:20:38.075 [2024-04-18 17:07:53.604925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:20:38.639 17:07:54 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:38.639 17:07:54 -- common/autotest_common.sh@850 -- # return 0
00:20:38.639 17:07:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:38.639 17:07:54 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:38.639 17:07:54 -- common/autotest_common.sh@10 -- # set +x
00:20:38.897 17:07:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:38.897 17:07:54 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:38.897 17:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:38.897 17:07:54 -- common/autotest_common.sh@10 -- # set +x
00:20:38.897 Malloc0
00:20:38.897 17:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:38.897 17:07:54 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:20:38.897 17:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:38.897 17:07:54 -- common/autotest_common.sh@10 -- # set +x
00:20:38.897 [2024-04-18 17:07:54.388961] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:38.897 17:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:38.897 17:07:54 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:38.897 17:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:38.897 17:07:54 -- common/autotest_common.sh@10 -- # set +x
00:20:38.897 17:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:38.897 17:07:54 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:38.897 17:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:38.897 17:07:54 -- common/autotest_common.sh@10 -- # set +x
00:20:38.897 17:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:38.897 17:07:54 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:38.897 17:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:38.897 17:07:54 -- common/autotest_common.sh@10 -- # set +x
00:20:38.897 [2024-04-18 17:07:54.417207] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:38.897 17:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:38.897 17:07:54 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:38.897 17:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:38.897 17:07:54 -- common/autotest_common.sh@10 -- # set +x
00:20:38.897 17:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:38.897 17:07:54 -- host/target_disconnect.sh@50 -- # reconnectpid=1765513
00:20:38.897 17:07:54 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:38.897 17:07:54 -- host/target_disconnect.sh@52 -- # sleep 2
00:20:38.897 EAL: No free 2048 kB hugepages reported on node 1
00:20:40.806 17:07:56 -- host/target_disconnect.sh@53 -- # kill -9 1765359
00:20:40.806 17:07:56 -- host/target_disconnect.sh@55 -- # sleep 2
00:20:40.806 Read completed with error (sct=0, sc=8)
00:20:40.806 starting I/O failed
00:20:40.806 Read completed with error (sct=0, sc=8)
00:20:40.806 starting I/O failed
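The xtrace above is target_disconnect.sh driving SPDK's JSON-RPC interface through its `rpc_cmd` helper. Outside the test harness, the same target setup can be sketched with SPDK's `scripts/rpc.py` against an already-running `nvmf_tgt` (bdev name, NQN, serial, address, and flags copied from the log; this is an illustrative sequence, not the test script itself):

```shell
#!/bin/sh
# Sketch of the RPC sequence from the log. Assumes an nvmf_tgt process is
# already running and scripts/rpc.py can reach its default RPC socket.
RPC=scripts/rpc.py

$RPC bdev_malloc_create 64 512 -b Malloc0     # 64 MiB malloc bdev, 512 B blocks
$RPC nvmf_create_transport -t tcp -o          # "*** TCP Transport Init ***"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The last add_listener produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice; `rpc_cmd` in the suite is just a wrapper that forwards these same commands to rpc.py.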
00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 
Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Write completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 [2024-04-18 17:07:56.442835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with 
error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.806 starting I/O failed 00:20:40.806 Read completed with error (sct=0, sc=8) 00:20:40.807 starting I/O failed 00:20:40.807 Read completed with error (sct=0, sc=8) 00:20:40.807 starting I/O failed 00:20:40.807 Write completed with error (sct=0, sc=8) 00:20:40.807 starting I/O failed 00:20:40.807 Read completed with error (sct=0, sc=8) 00:20:40.807 starting I/O failed 00:20:40.807 [2024-04-18 17:07:56.443251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:40.807 [2024-04-18 17:07:56.443462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.443595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.443620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.443764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.443901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.443926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.444038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.444192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.444218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.444345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.444469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.444496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.444609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.444722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.444747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 
00:20:40.807 [2024-04-18 17:07:56.444911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.445083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.445108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.445248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.445391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.445417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.445555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.445714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.445743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.445907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.446027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.446065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 
00:20:40.807 [2024-04-18 17:07:56.446181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.446321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.446346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.446462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.446595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.446620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.446754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.446879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.446904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.447035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.447147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.447173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 
00:20:40.807 [2024-04-18 17:07:56.447328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.447433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.447460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.447593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.447705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.447730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.447836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.448066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.448092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.448228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.448332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.448357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 
00:20:40.807 [2024-04-18 17:07:56.448501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.448598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.448624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.448785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.448919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.448945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.449050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.449185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.449211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.449345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.449458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.449484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 
00:20:40.807 [2024-04-18 17:07:56.449619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.449753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.449778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.449883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.450047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.450073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.450217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.450353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.807 [2024-04-18 17:07:56.450378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.807 qpair failed and we were unable to recover it. 00:20:40.807 [2024-04-18 17:07:56.450520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.450654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.450679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 
00:20:40.808 [2024-04-18 17:07:56.450808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.450966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.450992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.451133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.451275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.451303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.451463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.451594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.451620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.451776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.451934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.451959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 
00:20:40.808 [2024-04-18 17:07:56.452114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.452223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.452258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.452403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.452511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.452536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.452648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.452809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.452834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.452995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.453101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.453128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 
00:20:40.808 [2024-04-18 17:07:56.453259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.453417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.453458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.453564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.453708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.453736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.453985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.454167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.454192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.454327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.454466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.454493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 
00:20:40.808 [2024-04-18 17:07:56.454607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.454763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.454789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.454905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.455037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.455062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.455180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.455287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.455315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.455490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.455629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.455661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 
00:20:40.808 [2024-04-18 17:07:56.455796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.455945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.455970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.456082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.456193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.456219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.456353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.456491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.456517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.456650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.456818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.456843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 
00:20:40.808 [2024-04-18 17:07:56.456951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.457079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.457104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.457234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.457370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.457400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.457503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.457610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.457640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 00:20:40.808 [2024-04-18 17:07:56.457836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.457934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.808 [2024-04-18 17:07:56.457960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.808 qpair failed and we were unable to recover it. 
00:20:40.808 [2024-04-18 17:07:56.458100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.808 [2024-04-18 17:07:56.458279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.808 [2024-04-18 17:07:56.458304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.808 qpair failed and we were unable to recover it.
[... above four-line sequence repeats verbatim through 00:20:40.812 [2024-04-18 17:07:56.484366]; every connect() attempt for tqpair=0x125af30 to addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair is not recovered ...]
00:20:40.812 [2024-04-18 17:07:56.484524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.484653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.484679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 00:20:40.812 [2024-04-18 17:07:56.484838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.484998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.485024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 00:20:40.812 [2024-04-18 17:07:56.485157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.485275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.485303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 00:20:40.812 [2024-04-18 17:07:56.485468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.485600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.485626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 
00:20:40.812 [2024-04-18 17:07:56.485793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.485931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.485961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 00:20:40.812 [2024-04-18 17:07:56.486099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.486218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.486244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 00:20:40.812 [2024-04-18 17:07:56.486373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.486521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.486546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 00:20:40.812 [2024-04-18 17:07:56.486683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.486782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.486808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 
00:20:40.812 [2024-04-18 17:07:56.486944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.487049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.487075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 00:20:40.812 [2024-04-18 17:07:56.487209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.487308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.812 [2024-04-18 17:07:56.487334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.812 qpair failed and we were unable to recover it. 00:20:40.812 [2024-04-18 17:07:56.487474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.487630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.487660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.487794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.487901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.487927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 
00:20:40.813 [2024-04-18 17:07:56.488035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.488169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.488195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.488324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.488432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.488470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.488603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.488738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.488764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.488898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.489062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.489087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 
00:20:40.813 [2024-04-18 17:07:56.489192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.489319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.489345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.489486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.489646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.489672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.489808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.489968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.489998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.490177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.490308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.490337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 
00:20:40.813 [2024-04-18 17:07:56.490459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.490617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.490643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.490810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.490939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.490965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.491108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.491237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.491262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.491368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.491556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.491583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 
00:20:40.813 [2024-04-18 17:07:56.491699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.491884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.491910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.492048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.492173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.492199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.492316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.492436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.492465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.492594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.492756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.492781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 
00:20:40.813 [2024-04-18 17:07:56.492921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.493080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.493106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.493242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.493372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.493404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.493579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.493684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.493711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.493846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.494008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.494034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 
00:20:40.813 [2024-04-18 17:07:56.494173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.494282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.494309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.494418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.494557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.494583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.494721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.494876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.494902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.495014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.495178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.495204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 
00:20:40.813 [2024-04-18 17:07:56.495303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.495436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.813 [2024-04-18 17:07:56.495466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.813 qpair failed and we were unable to recover it. 00:20:40.813 [2024-04-18 17:07:56.495625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.495765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.495790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.495988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.496124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.496150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.496284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.496464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.496507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 
00:20:40.814 [2024-04-18 17:07:56.496681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.496792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.496818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.496962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.497103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.497128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.497237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.497369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.497401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.497522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.497652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.497678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 
00:20:40.814 [2024-04-18 17:07:56.497811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.497938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.497964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.498101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.498241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.498267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.498401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.498546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.498572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.498710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.498871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.498897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 
00:20:40.814 [2024-04-18 17:07:56.499058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.499187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.499212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.499317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.499449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.499475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.499587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.499719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.499745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.499918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.500025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.500051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 
00:20:40.814 [2024-04-18 17:07:56.500212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.500315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.500340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.500524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.500631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.500657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.500772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.500929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.500954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.501089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.501214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.501244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 
00:20:40.814 [2024-04-18 17:07:56.501434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.501560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.501586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.501700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.501833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.501858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.502014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.502116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.502142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.502250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.502384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.502410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 
00:20:40.814 [2024-04-18 17:07:56.502571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.502740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.814 [2024-04-18 17:07:56.502765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.814 qpair failed and we were unable to recover it. 00:20:40.814 [2024-04-18 17:07:56.502912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.815 [2024-04-18 17:07:56.503045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.815 [2024-04-18 17:07:56.503071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.815 qpair failed and we were unable to recover it. 00:20:40.815 [2024-04-18 17:07:56.503207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.815 [2024-04-18 17:07:56.503339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.815 [2024-04-18 17:07:56.503366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.815 qpair failed and we were unable to recover it. 00:20:40.815 [2024-04-18 17:07:56.503490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.815 [2024-04-18 17:07:56.503628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.815 [2024-04-18 17:07:56.503653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:40.815 qpair failed and we were unable to recover it. 
00:20:40.815 [2024-04-18 17:07:56.503781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.503887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.503912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.504040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.504214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.504240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.504379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.504514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.504540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.504706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.504855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.504883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.505038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.505165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.505190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.505332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.505476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.505502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.505635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.505757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.505783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.505910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.506018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.506047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.506194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.506349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.506374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.506515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.506672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.506697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.506854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.507047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.507072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.507171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.507279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.507304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.507421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.507552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.815 [2024-04-18 17:07:56.507578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:40.815 qpair failed and we were unable to recover it.
00:20:40.815 [2024-04-18 17:07:56.507690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.507822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.507847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.508044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.508141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.508167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.508275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.508436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.508461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.508593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.508727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.508752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.508880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.509029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.509057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.509212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.509369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.509402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.509515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.509674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.509699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.509851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.509955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.509981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.510093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.510217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.510242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.510363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.510555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.510581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.510742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.510875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.510901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.511011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.511116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.511158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.511318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.511428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.511454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.511589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.511741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.511769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.511916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.512078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.512104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.512243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.512415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.512441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.512591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.512761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.512789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.512936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.513061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.513087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.513267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.513412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.513441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.513588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.513715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.513744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.513887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.514050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.514075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.514214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.514375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.514406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.514567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.514727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.514752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.514890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.515048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.091 [2024-04-18 17:07:56.515073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.091 qpair failed and we were unable to recover it.
00:20:41.091 [2024-04-18 17:07:56.515234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.515359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.515420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.515572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.515716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.515744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.515922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.516019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.516044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.516231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.516346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.516374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.516512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.516655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.516684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.516850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.517006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.517052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.517225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.517367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.517403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.517526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.517674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.517700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.517829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.517987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.518012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.518147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.518279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.518305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.518456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.518628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.518657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.518784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.518919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.518945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.519123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.519280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.519306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.519466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.519616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.519644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.519794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.519921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.519947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.520073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.520232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.520266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.520392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.520532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.520561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.520714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.520846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.520871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.521001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.521176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.521204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.521319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.521473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.521499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.521636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.521769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.521794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.521903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.522009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.522035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.522194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.522341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.522369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.522526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.522684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.522710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.522845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.523019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.523048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.523177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.523335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.523361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.523531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.523636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.523662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.523792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.523918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.523946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.092 qpair failed and we were unable to recover it.
00:20:41.092 [2024-04-18 17:07:56.524118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.524293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.092 [2024-04-18 17:07:56.524320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.093 qpair failed and we were unable to recover it.
00:20:41.093 [2024-04-18 17:07:56.524456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.524555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.524581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.093 qpair failed and we were unable to recover it.
00:20:41.093 [2024-04-18 17:07:56.524686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.524830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.524855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.093 qpair failed and we were unable to recover it.
00:20:41.093 [2024-04-18 17:07:56.524991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.525100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.525145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.093 qpair failed and we were unable to recover it.
00:20:41.093 [2024-04-18 17:07:56.525263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.525401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.525427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.093 qpair failed and we were unable to recover it.
00:20:41.093 [2024-04-18 17:07:56.525555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.525676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.525704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.093 qpair failed and we were unable to recover it.
00:20:41.093 [2024-04-18 17:07:56.525819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.525986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.093 [2024-04-18 17:07:56.526012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.093 qpair failed and we were unable to recover it.
00:20:41.093 [2024-04-18 17:07:56.526137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.526264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.526290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.526491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.526664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.526692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.526845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.526982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.527011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.527193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.527368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.527404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 
00:20:41.093 [2024-04-18 17:07:56.527547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.527751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.527806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.527939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.528096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.528121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.528287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.528459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.528512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.528634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.528783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.528812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 
00:20:41.093 [2024-04-18 17:07:56.528934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.529081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.529109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.529231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.529327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.529353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.529454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.529585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.529610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.529742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.529861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.529889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 
00:20:41.093 [2024-04-18 17:07:56.530009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.530106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.530132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.530286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.530450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.530476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.530635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.530761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.530787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.530917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.531076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.531119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 
00:20:41.093 [2024-04-18 17:07:56.531230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.531369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.093 [2024-04-18 17:07:56.531418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.093 qpair failed and we were unable to recover it. 00:20:41.093 [2024-04-18 17:07:56.531577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.531709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.531736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.531898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.532027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.532052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.532209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.532366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.532401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 
00:20:41.094 [2024-04-18 17:07:56.532587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.532745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.532817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.532952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.533087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.533113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.533285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.533439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.533470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.533582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.533724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.533751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 
00:20:41.094 [2024-04-18 17:07:56.533908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.534013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.534039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.534176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.534306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.534331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.534466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.534595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.534624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.534785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.534893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.534919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 
00:20:41.094 [2024-04-18 17:07:56.535065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.535181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.535210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.535413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.535533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.535562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.535718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.535843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.535868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.536000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.536143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.536175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 
00:20:41.094 [2024-04-18 17:07:56.536360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.536589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.536619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.536775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.536882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.536907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.537036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.537170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.537195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.537294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.537451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.537477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 
00:20:41.094 [2024-04-18 17:07:56.537572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.537704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.537729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.537910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.538049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.538077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.538252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.538412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.538438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.538572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.538697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.538723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 
00:20:41.094 [2024-04-18 17:07:56.538884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.539058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.539086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.539260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.539404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.539446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.094 qpair failed and we were unable to recover it. 00:20:41.094 [2024-04-18 17:07:56.539609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.539736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.094 [2024-04-18 17:07:56.539777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.539895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.540009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.540037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 
00:20:41.095 [2024-04-18 17:07:56.540176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.540335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.540363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.540552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.540722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.540779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.540922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.541064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.541093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.541269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.541430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.541471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 
00:20:41.095 [2024-04-18 17:07:56.541618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.541785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.541827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.542025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.542129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.542156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.542281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.542440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.542466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.542597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.542739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.542764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 
00:20:41.095 [2024-04-18 17:07:56.542937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.543071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.543097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.543238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.543368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.543421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.543550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.543655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.543680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.543867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.544007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.544035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 
00:20:41.095 [2024-04-18 17:07:56.544202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.544390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.544418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.544547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.544649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.544674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.544834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.544992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.545020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.545193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.545318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.545344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 
00:20:41.095 [2024-04-18 17:07:56.545480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.545616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.545642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.545801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.545959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.545984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.546139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.546288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.546317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 00:20:41.095 [2024-04-18 17:07:56.546501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.546610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.546636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it. 
00:20:41.095 [2024-04-18 17:07:56.546738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.546850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.095 [2024-04-18 17:07:56.546875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.095 qpair failed and we were unable to recover it.
[identical three-line error sequence repeated for each subsequent reconnection attempt from 17:07:56.546738 through 17:07:56.574530: posix.c:1037:posix_sock_create connect() failed, errno = 111 (twice), then nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it."]
00:20:41.099 [2024-04-18 17:07:56.574640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.574741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.574767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.574928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.575053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.575095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.575277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.575435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.575462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.575620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.575749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.575775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 
00:20:41.099 [2024-04-18 17:07:56.575932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.576116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.576142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.576277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.576427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.576469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.576584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.576760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.576789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.576930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.577103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.577132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 
00:20:41.099 [2024-04-18 17:07:56.577279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.577438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.577467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.577616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.577744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.577770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.577932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.578082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.578111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.578263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.578368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.578399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 
00:20:41.099 [2024-04-18 17:07:56.578537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.578648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.578678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.578826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.578926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.578951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.579078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.579264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.579290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.579409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.579542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.579568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 
00:20:41.099 [2024-04-18 17:07:56.579750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.579894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.579923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.580077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.580208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.580234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.580418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.580540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.580566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.580727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.580885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.580914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 
00:20:41.099 [2024-04-18 17:07:56.581058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.581204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.581233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.581424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.581561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.581587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.581717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.581881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.581907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.582076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.582194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.582223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 
00:20:41.099 [2024-04-18 17:07:56.582367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.582516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.582545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.582673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.582834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.582860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.099 qpair failed and we were unable to recover it. 00:20:41.099 [2024-04-18 17:07:56.583014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.583123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.099 [2024-04-18 17:07:56.583152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.583324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.583489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.583516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 
00:20:41.100 [2024-04-18 17:07:56.583622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.583748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.583791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.583950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.584079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.584104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.584251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.584394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.584423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.584567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.584708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.584736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 
00:20:41.100 [2024-04-18 17:07:56.584853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.584991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.585025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.585173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.585311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.585339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.585474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.585579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.585605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.585746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.585914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.585940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 
00:20:41.100 [2024-04-18 17:07:56.586101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.586261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.586287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.586417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.586578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.586604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.586704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.586836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.586862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.587018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.587176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.587205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 
00:20:41.100 [2024-04-18 17:07:56.587395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.587507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.587536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.587688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.587816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.587842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.587950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.588107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.588137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.588271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.588406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.588433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 
00:20:41.100 [2024-04-18 17:07:56.588567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.588721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.588750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.588900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.589042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.589085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.589260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.589405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.589434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.589662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.589854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.589906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 
00:20:41.100 [2024-04-18 17:07:56.590076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.590223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.590251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.100 qpair failed and we were unable to recover it. 00:20:41.100 [2024-04-18 17:07:56.590428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.590569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.100 [2024-04-18 17:07:56.590595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.590740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.590925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.590988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.591162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.591307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.591335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 
00:20:41.101 [2024-04-18 17:07:56.591458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.591585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.591614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.591764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.591941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.591970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.592114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.592253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.592282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.592464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.592604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.592633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 
00:20:41.101 [2024-04-18 17:07:56.592753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.592861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.592890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.593047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.593151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.593177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.593284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.593412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.593438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.593572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.593719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.593748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 
00:20:41.101 [2024-04-18 17:07:56.593895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.594058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.594083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.594214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.594347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.594373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.594549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.594687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.594716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.594969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.595139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.595167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 
00:20:41.101 [2024-04-18 17:07:56.595314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.595492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.595521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.595658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.595776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.595801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.595930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.596077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.596107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.596269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.596405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.596432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 
00:20:41.101 [2024-04-18 17:07:56.596537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.596692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.596720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.596841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.597000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.597026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.597201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.597348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.597377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.597542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.597701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.597727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 
00:20:41.101 [2024-04-18 17:07:56.597832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.597935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.597960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.598106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.598223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.598250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.598392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.598580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.598609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.598758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.598872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.598913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 
00:20:41.101 [2024-04-18 17:07:56.599073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.599245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.599274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.599453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.599555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.599581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.101 qpair failed and we were unable to recover it. 00:20:41.101 [2024-04-18 17:07:56.599742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.101 [2024-04-18 17:07:56.599901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.599927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.600104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.600281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.600309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 
00:20:41.102 [2024-04-18 17:07:56.600458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.600588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.600614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.600753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.600885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.600911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.601075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.601232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.601275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.601459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.601574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.601600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 
00:20:41.102 [2024-04-18 17:07:56.601762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.601910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.601939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.602062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.602231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.602257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.602416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.602537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.602566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.602717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.602934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.602962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 
00:20:41.102 [2024-04-18 17:07:56.603067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.603181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.603211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.603396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.603531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.603559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.603722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.603879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.603905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.604065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.604218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.604247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 
00:20:41.102 [2024-04-18 17:07:56.604402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.604560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.604586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.604751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.604860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.604907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.605060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.605201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.605230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.605456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.605656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.605685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 
00:20:41.102 [2024-04-18 17:07:56.605830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.605941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.605969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.606090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.606228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.606255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.606436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.606582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.606611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.606752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.606899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.606928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 
00:20:41.102 [2024-04-18 17:07:56.607099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.607274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.607302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.607426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.607555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.607581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.607687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.607785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.607811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.607947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.608092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.608121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 
00:20:41.102 [2024-04-18 17:07:56.608264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.608403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.608432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.608589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.608718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.608743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.102 [2024-04-18 17:07:56.608900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.609079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.102 [2024-04-18 17:07:56.609105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.102 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.609240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.609414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.609463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 
00:20:41.103 [2024-04-18 17:07:56.609619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.609754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.609780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.609940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.610122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.610150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.610295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.610401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.610431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.610546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.610753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.610782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 
00:20:41.103 [2024-04-18 17:07:56.610902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.611016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.611045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.611170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.611306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.611332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.611539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.611670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.611712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.611852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.611997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.612026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 
00:20:41.103 [2024-04-18 17:07:56.612195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.612336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.612365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.612530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.612662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.612688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.612845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.613001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.613027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.613184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.613334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.613360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 
00:20:41.103 [2024-04-18 17:07:56.613517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.613668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.613697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.613836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.613999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.614048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.614195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.614343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.614372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.614542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.614713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.614743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 
00:20:41.103 [2024-04-18 17:07:56.614902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.615034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.615060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.615192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.615353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.615379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.615548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.615692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.615720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.615893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.616082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.616137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 
00:20:41.103 [2024-04-18 17:07:56.616291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.616441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.616470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.616628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.616785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.616812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.616947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.617103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.617144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 00:20:41.103 [2024-04-18 17:07:56.617305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.617490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.103 [2024-04-18 17:07:56.617519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.103 qpair failed and we were unable to recover it. 
00:20:41.106 [2024-04-18 17:07:56.643758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.106 [2024-04-18 17:07:56.643920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.643946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.644072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.644169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.644195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.644326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.644444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.644473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.644630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.644731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.644758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 
00:20:41.107 [2024-04-18 17:07:56.644936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.645108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.645137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.645289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.645449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.645476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.645597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.645740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.645769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.645987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.646119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.646147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 
00:20:41.107 [2024-04-18 17:07:56.646302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.646481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.646511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.646637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.646796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.646822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.646945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.647106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.647132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.647238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.647418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.647449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 
00:20:41.107 [2024-04-18 17:07:56.647620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.647777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.647804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.647930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.648085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.648111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.648298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.648422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.648452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.648591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.648752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.648778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 
00:20:41.107 [2024-04-18 17:07:56.648908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.649063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.649089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.649184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.649292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.649320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.649470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.649587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.649616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.649785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.649976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.650002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 
00:20:41.107 [2024-04-18 17:07:56.650134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.650316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.650342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.650476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.650610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.650637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.650814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.650927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.650955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.651105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.651249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.651278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 
00:20:41.107 [2024-04-18 17:07:56.651414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.651513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.651539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.651696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.651854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.651880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.651988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.652119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.107 [2024-04-18 17:07:56.652144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.107 qpair failed and we were unable to recover it. 00:20:41.107 [2024-04-18 17:07:56.652292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.652413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.652442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 
00:20:41.108 [2024-04-18 17:07:56.652582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.652718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.652747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.652897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.653037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.653063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.653247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.653400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.653445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.653583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.653802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.653853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 
00:20:41.108 [2024-04-18 17:07:56.654008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.654155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.654184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.654314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.654480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.654507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.654641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.654775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.654801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.654925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.655098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.655127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 
00:20:41.108 [2024-04-18 17:07:56.655284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.655414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.655441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.655572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.655714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.655740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.655918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.656066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.656095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.656266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.656416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.656446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 
00:20:41.108 [2024-04-18 17:07:56.656616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.656770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.656801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.656906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.657069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.657096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.657248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.657372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.657412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.657544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.657667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.657710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 
00:20:41.108 [2024-04-18 17:07:56.657843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.657993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.658022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.658170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.658329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.658355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.658513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.658677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.658703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.658837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.658993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.659021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 
00:20:41.108 [2024-04-18 17:07:56.659200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.659354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.659429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.659590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.659715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.659741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.659875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.660001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.660030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.660165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.660308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.660337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 
00:20:41.108 [2024-04-18 17:07:56.660521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.660698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.660727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.660906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.661037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.661063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.661195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.661374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.661407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.108 qpair failed and we were unable to recover it. 00:20:41.108 [2024-04-18 17:07:56.661564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.108 [2024-04-18 17:07:56.661703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.661730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 
00:20:41.109 [2024-04-18 17:07:56.661874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.662045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.662074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.662200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.662359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.662395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.662544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.662699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.662728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.663003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.663171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.663199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 
00:20:41.109 [2024-04-18 17:07:56.663372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.663515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.663544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.663707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.663814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.663840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.663991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.664139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.664168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.664293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.664472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.664502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 
00:20:41.109 [2024-04-18 17:07:56.664621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.664758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.664786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.664962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.665140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.665168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.665313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.665455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.665484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.665621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.665775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.665826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 
00:20:41.109 [2024-04-18 17:07:56.665975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.666126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.666154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.666312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.666415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.666442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.666588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.666704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.666732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.666860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.667011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.667040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 
00:20:41.109 [2024-04-18 17:07:56.667188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.667327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.667356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.667521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.667654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.667680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.667814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.667925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.667953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.668109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.668274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.668302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 
00:20:41.109 [2024-04-18 17:07:56.668445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.668571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.668599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.668743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.668880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.668905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.669039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.669194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.669222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.669366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.669521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.669550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 
00:20:41.109 [2024-04-18 17:07:56.669701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.669873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.669902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.670032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.670196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.670223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.670390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.670499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.670527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.109 qpair failed and we were unable to recover it. 00:20:41.109 [2024-04-18 17:07:56.670654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.109 [2024-04-18 17:07:56.670767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.670795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 
00:20:41.110 [2024-04-18 17:07:56.670933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.671071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.671099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.671278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.671408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.671450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.671603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.671742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.671768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.672033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.672193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.672222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 
00:20:41.110 [2024-04-18 17:07:56.672368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.672519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.672547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.672727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.672832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.672858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.673006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.673157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.673185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.673298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.673417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.673451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 
00:20:41.110 [2024-04-18 17:07:56.673565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.673748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.673777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.673926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.674053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.674079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.674261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.674398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.674427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.674657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.674859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.674887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 
00:20:41.110 [2024-04-18 17:07:56.675070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.675201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.675227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.675364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.675503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.675530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.675641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.675807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.675833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.675959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.676155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.676180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 
00:20:41.110 [2024-04-18 17:07:56.676338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.676505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.676535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.676685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.676860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.676886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.677021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.677124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.677149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.677295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.677457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.677485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 
00:20:41.110 [2024-04-18 17:07:56.677659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.677789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.677815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.677945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.678077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.678103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.678261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.678456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.678482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 00:20:41.110 [2024-04-18 17:07:56.678619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.678778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.678807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.110 qpair failed and we were unable to recover it. 
00:20:41.110 [2024-04-18 17:07:56.678957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.110 [2024-04-18 17:07:56.679116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.679157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.679303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.679435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.679461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.679648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.679784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.679809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.679921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.680101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.680129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 
00:20:41.111 [2024-04-18 17:07:56.680319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.680449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.680475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.680613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.680741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.680767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.680895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.681057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.681083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.681242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.681396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.681426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 
00:20:41.111 [2024-04-18 17:07:56.681582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.681719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.681745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.681871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.682005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.682033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.682167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.682314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.682342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.682522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.682662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.682691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 
00:20:41.111 [2024-04-18 17:07:56.682826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.683007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.683036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.683215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.683325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.683351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.683524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.683658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.683684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.683785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.683891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.683917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 
00:20:41.111 [2024-04-18 17:07:56.684079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.684236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.684264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.684430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.684545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.684571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.684754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.684887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.684954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 00:20:41.111 [2024-04-18 17:07:56.685138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.685310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.111 [2024-04-18 17:07:56.685338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.111 qpair failed and we were unable to recover it. 
00:20:41.111 [2024-04-18 17:07:56.685524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.685670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.685699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.685875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.686057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.686123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.686267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.686450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.686478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.686641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.686766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.686803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.686968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.687073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.687099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.687254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.687398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.687441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.687578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.687758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.687787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.687938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.688082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.688110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.688284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.688412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.688443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.688560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.688715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.688740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.688876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.689009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.689051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.111 qpair failed and we were unable to recover it.
00:20:41.111 [2024-04-18 17:07:56.689194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.111 [2024-04-18 17:07:56.689337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.689365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.689540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.689697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.689722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.689830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.689987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.690013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.690170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.690307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.690340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.690536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.690641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.690667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.690797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.690927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.690955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.691105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.691232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.691258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.691368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.691472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.691498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.691681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.691863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.691891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.692039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.692154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.692183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.692326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.692464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.692491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.692651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.692851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.692876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.693008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.693106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.693130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.693289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.693418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.693449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.693580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.693709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.693735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.693837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.693952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.693977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.694112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.694241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.694269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.694390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.694550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.694575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.694705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.694806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.694832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.695027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.695143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.695169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.695328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.695438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.695467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.695591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.695740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.695768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.695884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.695989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.696015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.696130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.696269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.696297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.696457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.696570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.696597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.696727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.696862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.696890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.697041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.697155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.697180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.697340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.697483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.697512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.697621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.697733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.697761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.697873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.697988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.698016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.698150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.698282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.698308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.698408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.698589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.698617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.698724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.698895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.698921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.112 qpair failed and we were unable to recover it.
00:20:41.112 [2024-04-18 17:07:56.699026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.112 [2024-04-18 17:07:56.699139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.699165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.699304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.699417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.699444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.699635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.699744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.699769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.699931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.700057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.700085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.700272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.700431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.700458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.700571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.700709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.700734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.700915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.701024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.701052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.701162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.701283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.701311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.701438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.701557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.701587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.701713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.701868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.701893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.702017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.702137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.702165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.702336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.702481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.702511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.702658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.702775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.702803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.702931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.703085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.703111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.703215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.703317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.703365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.703519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.703632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.703658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.703771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.703904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.703929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.704072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.704179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.704204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.704319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.704478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.704505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.704622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.704767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.704810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.704918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.705020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.705046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.705150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.705288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.705316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.705457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.705562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.705593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.705704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.705858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.705884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.705990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.706096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.706122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.706246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.706378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.706409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.706552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.706694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.706722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.706876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.706982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.707007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.707111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.707275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.707301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.707405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.707511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.707537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.707640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.707750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.707776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.707900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.708033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.708063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.708189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.708331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.708359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.708499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.708601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.113 [2024-04-18 17:07:56.708627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.113 qpair failed and we were unable to recover it.
00:20:41.113 [2024-04-18 17:07:56.708756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.708903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.708931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.709084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.709237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.709263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.709364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.709474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.709501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.709616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.709775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.709800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.709980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.710132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.710160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.710355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.710493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.710519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.710633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.710731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.710772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.710958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.711089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.711115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.711229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.711325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.711351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.711465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.711595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.114 [2024-04-18 17:07:56.711621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.114 qpair failed and we were unable to recover it.
00:20:41.114 [2024-04-18 17:07:56.711741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.711878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.711924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.712054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.712165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.712190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.712356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.712536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.712566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.712717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.712861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.712889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 
00:20:41.114 [2024-04-18 17:07:56.713033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.713186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.713211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.713318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.713447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.713474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.713645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.713754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.713780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.713929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.714153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.714179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 
00:20:41.114 [2024-04-18 17:07:56.714295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.714428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.714457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.714606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.714742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.714784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.714922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.715080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.715105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.715209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.715316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.715342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 
00:20:41.114 [2024-04-18 17:07:56.715460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.715591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.715620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.715761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.715864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.715890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.715987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.716116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.716141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.716273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.716399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.716428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 
00:20:41.114 [2024-04-18 17:07:56.716578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.716735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.716761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.716877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.716985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.717011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.114 qpair failed and we were unable to recover it. 00:20:41.114 [2024-04-18 17:07:56.717138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.114 [2024-04-18 17:07:56.717279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.717307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.717426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.717587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.717613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 
00:20:41.115 [2024-04-18 17:07:56.717746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.717896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.717924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.718108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.718224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.718250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.718379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.718539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.718565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.718700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.718800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.718826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 
00:20:41.115 [2024-04-18 17:07:56.718934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.719059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.719085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.719216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.719340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.719366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.719502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.719657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.719686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.719799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.719914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.719943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 
00:20:41.115 [2024-04-18 17:07:56.720087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.720247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.720273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.720412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.720521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.720548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.720673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.720825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.720853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.721003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.721162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.721187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 
00:20:41.115 [2024-04-18 17:07:56.721319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.721461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.721488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.721592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.721720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.721745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.721883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.722020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.722049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.722180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.722279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.722304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 
00:20:41.115 [2024-04-18 17:07:56.722433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.722547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.722573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.722686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.722787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.722813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.722961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.723083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.723116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.723264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.723392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.723421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 
00:20:41.115 [2024-04-18 17:07:56.723588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.723722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.723747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.723925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.724053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.724079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.724204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.724364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.724421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.724553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.724719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.724745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 
00:20:41.115 [2024-04-18 17:07:56.724906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.725041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.725070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.725215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.725319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.725344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.725472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.725598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.725624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.725741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.725881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.725910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 
00:20:41.115 [2024-04-18 17:07:56.726082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.726210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.726240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.726394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.726494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.726520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.115 [2024-04-18 17:07:56.726629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.726758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.115 [2024-04-18 17:07:56.726786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.115 qpair failed and we were unable to recover it. 00:20:41.116 [2024-04-18 17:07:56.726895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.727037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.727079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 
00:20:41.116 [2024-04-18 17:07:56.727232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.727405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.727434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 00:20:41.116 [2024-04-18 17:07:56.727566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.727694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.727720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 00:20:41.116 [2024-04-18 17:07:56.727831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.727925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.727951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 00:20:41.116 [2024-04-18 17:07:56.728048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.728153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.728180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 
00:20:41.116 [2024-04-18 17:07:56.728280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.728398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.728424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 00:20:41.116 [2024-04-18 17:07:56.728563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.728665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.728691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 00:20:41.116 [2024-04-18 17:07:56.728803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.728982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.729008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 00:20:41.116 [2024-04-18 17:07:56.729115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.729264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.116 [2024-04-18 17:07:56.729293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.116 qpair failed and we were unable to recover it. 
00:20:41.116-00:20:41.118 [... 2024-04-18 17:07:56.729442 through 17:07:56.754646: the identical sequence repeats continuously: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (twice per attempt), followed by nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:20:41.118 [2024-04-18 17:07:56.754833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.118 [2024-04-18 17:07:56.754941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.754967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.755098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.755244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.755286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.755442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.755545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.755571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.755683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.755834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.755863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.756007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.756121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.756150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.756325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.756458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.756501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.756643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.756753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.756786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.756933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.757053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.757082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.757257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.757416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.757442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.757579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.757720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.757746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.757852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.758031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.758059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.758163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.758308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.758338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.758467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.758619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.758647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.758800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.758907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.758933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.759045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.759156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.759182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.759318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.759490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.759518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.759659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.759854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.759912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.760086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.760225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.760251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.760407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.760546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.760575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.760737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.760973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.761023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.761179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.761362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.761398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.761563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.761700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.761726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.761831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.761955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.761997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.762153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.762291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.762317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.762448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.762581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.762606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.762762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.762902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.762945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.763067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.763236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.763265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.763416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.763534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.763562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.763685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.763848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.763874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.764011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.764123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.764149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.764296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.764430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.764458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.764582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.764767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.764794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.764901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.765030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.765056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.765162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.765263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.765289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.765414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.765551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.765577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 00:20:41.119 [2024-04-18 17:07:56.765733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.765873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.765902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.119 qpair failed and we were unable to recover it. 
00:20:41.119 [2024-04-18 17:07:56.766027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.119 [2024-04-18 17:07:56.766168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.766196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.766333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.766476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.766502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.766617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.766752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.766781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.766923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.767069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.767098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 
00:20:41.120 [2024-04-18 17:07:56.767253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.767371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.767405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.767538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.767646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.767672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.767806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.767913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.767939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.768057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.768207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.768235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 
00:20:41.120 [2024-04-18 17:07:56.768444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.768603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.768632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.768797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.768898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.768924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.769081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.769191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.769217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.769375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.769508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.769537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 
00:20:41.120 [2024-04-18 17:07:56.769659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.769773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.769802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.769971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.770106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.770134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.770311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.770482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.770512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.770625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.770773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.770802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 
00:20:41.120 [2024-04-18 17:07:56.770912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.771058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.771084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.771231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.771339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.771364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.771504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.771613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.771639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 00:20:41.120 [2024-04-18 17:07:56.771771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.771913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.120 [2024-04-18 17:07:56.771939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.120 qpair failed and we were unable to recover it. 
00:20:41.120 [2024-04-18 17:07:56.772068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.772184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.772212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.772373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.772504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.772535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.772670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.772819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.772847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.772968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.773140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.773168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.773281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.773426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.773455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.773609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.773739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.773764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.773915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.774035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.774063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.774243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.774370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.774405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.774546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.774702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.774730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.774885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.775022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.775048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.775180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.775333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.775362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.775498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.775612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.775640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.775791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.775913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.775942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.776092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.776194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.776220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.776343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.776483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.776510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.776626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.776762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.776788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.776917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.777074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.777100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.777209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.777342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.777368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.120 qpair failed and we were unable to recover it.
00:20:41.120 [2024-04-18 17:07:56.777482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.120 [2024-04-18 17:07:56.777608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.777634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.121 qpair failed and we were unable to recover it.
00:20:41.121 [2024-04-18 17:07:56.777793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.777908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.777937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.121 qpair failed and we were unable to recover it.
00:20:41.121 [2024-04-18 17:07:56.778054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.778176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.778202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.121 qpair failed and we were unable to recover it.
00:20:41.121 [2024-04-18 17:07:56.778309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.778414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.778441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.121 qpair failed and we were unable to recover it.
00:20:41.121 [2024-04-18 17:07:56.778557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.778703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.778732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.121 qpair failed and we were unable to recover it.
00:20:41.121 [2024-04-18 17:07:56.778875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.779033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.779061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.121 qpair failed and we were unable to recover it.
00:20:41.121 [2024-04-18 17:07:56.779177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.779314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.779343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.121 qpair failed and we were unable to recover it.
00:20:41.121 [2024-04-18 17:07:56.779486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.779596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.779622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.121 qpair failed and we were unable to recover it.
00:20:41.121 [2024-04-18 17:07:56.779718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.121 [2024-04-18 17:07:56.779880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.779908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.780026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.780164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.780202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.780306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.780406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.780432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.780540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.780670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.780696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.780859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.780991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.781017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.781147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.781306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.781332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.781452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.781563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.781589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.781723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.781851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.781877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.781989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.782097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.782139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.782303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.782415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.782443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.782547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.782672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.782698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.782834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.782973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.782999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.783185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.783362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.783396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.783553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.783675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.783701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.399 qpair failed and we were unable to recover it.
00:20:41.399 [2024-04-18 17:07:56.783826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.784005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.399 [2024-04-18 17:07:56.784031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.784161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.784261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.784287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.784416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.784559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.784586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.784687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.784837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.784865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.784978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.785099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.785129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.785285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.785398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.785425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.785581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.785691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.785719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.785862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.785996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.786025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.786162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.786305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.786331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.786478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.786584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.786610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.786741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.786848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.786874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.787020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.787129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.787157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.787298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.787438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.787476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.787606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.787735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.787761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.787900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.788059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.788087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.788257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.788371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.788406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.788552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.788689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.788717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.788887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.789020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.789046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.789184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.789352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.789387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.789567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.789667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.789693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.789826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.789987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.790015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.790147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.790285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.790311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.790448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.790582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.790611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.790732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.790874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.790903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.791047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.791161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.791190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.791372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.791495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.791521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.791659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.791808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.791836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.791956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.792101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.792130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.792278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.792395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.792424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.792572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.792712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.792737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.792865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.793020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.793046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.793192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.793295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.400 [2024-04-18 17:07:56.793321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.400 qpair failed and we were unable to recover it.
00:20:41.400 [2024-04-18 17:07:56.793422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.793526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.793552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.793661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.793767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.793792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.793923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.794042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.794071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.794184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.794328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.794356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.794525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.794736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.794762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.794877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.795036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.795062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.795169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.795299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.795325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.795456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.795565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.795591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.795709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.795842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.795868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.795970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.796104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.796130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.796255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.796409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.796438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.796590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.796721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.796747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.796866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.797005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.797031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.797141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.797297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.797323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.797484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.797634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.797663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.797810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.797925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.401 [2024-04-18 17:07:56.797951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.401 qpair failed and we were unable to recover it.
00:20:41.401 [2024-04-18 17:07:56.798064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.798194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.798220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.798322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.798434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.798460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.798603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.798729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.798758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.798883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.799018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.799044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 
00:20:41.401 [2024-04-18 17:07:56.799171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.799289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.799317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.799455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.799564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.799591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.799743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.799891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.799920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.800094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.800203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.800232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 
00:20:41.401 [2024-04-18 17:07:56.800344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.800472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.800516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.800643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.800790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.800816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.801027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.801201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.801229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.801466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.801676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.801702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 
00:20:41.401 [2024-04-18 17:07:56.801821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.801959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.801987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.802143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.802268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.802294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.802436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.802547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.802576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 00:20:41.401 [2024-04-18 17:07:56.802727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.802845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.401 [2024-04-18 17:07:56.802878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.401 qpair failed and we were unable to recover it. 
00:20:41.402 [2024-04-18 17:07:56.803050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.803162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.803192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.803350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.803499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.803541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.803722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.803839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.803864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.803976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.804080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.804106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 
00:20:41.402 [2024-04-18 17:07:56.804212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.804322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.804348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.804484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.804582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.804607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.804759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.804906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.804934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.805087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.805208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.805238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 
00:20:41.402 [2024-04-18 17:07:56.805412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.805519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.805545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.805652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.805786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.805816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.805920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.806088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.806117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.806266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.806392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.806421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 
00:20:41.402 [2024-04-18 17:07:56.806583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.806721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.806746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.806847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.806975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.807001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.807155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.807265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.807290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.807426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.807570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.807598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 
00:20:41.402 [2024-04-18 17:07:56.807762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.807893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.807919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.808029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.808157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.808183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.808338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.808462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.808493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.808610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.808719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.808748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 
00:20:41.402 [2024-04-18 17:07:56.808932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.809057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.809085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.809217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.809358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.809392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.809513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.809621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.809647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.809807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.809975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.810002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 
00:20:41.402 [2024-04-18 17:07:56.810137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.810347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.810398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.810548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.810685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.810711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.810869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.810991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.811032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.811140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.811286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.811314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 
00:20:41.402 [2024-04-18 17:07:56.811451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.811591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.811617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.811720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.811852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.811877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.402 [2024-04-18 17:07:56.812008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.812127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.402 [2024-04-18 17:07:56.812158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.402 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.812272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.812491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.812520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 
00:20:41.403 [2024-04-18 17:07:56.812655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.812796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.812824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.812946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.813055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.813082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.813215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.813360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.813412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.813519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.813676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.813705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 
00:20:41.403 [2024-04-18 17:07:56.813827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.813951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.813980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.814196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.814324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.814366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.814521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.814646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.814677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.814897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.815019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.815049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 
00:20:41.403 [2024-04-18 17:07:56.815195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.815322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.815351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.815519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.815652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.815678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.815777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.815927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.815955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.816083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.816275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.816302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 
00:20:41.403 [2024-04-18 17:07:56.816408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.816542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.816568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.816731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.816863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.816907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.817051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.817174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.817203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.817343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.817473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.817503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 
00:20:41.403 [2024-04-18 17:07:56.817625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.817771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.817799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.817948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.818055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.818081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.818223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.818386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.818413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.818543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.818703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.818731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 
00:20:41.403 [2024-04-18 17:07:56.818857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.819030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.819060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.819198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.819354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.819387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.819538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.819689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.819717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 00:20:41.403 [2024-04-18 17:07:56.819881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.820110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.403 [2024-04-18 17:07:56.820139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.403 qpair failed and we were unable to recover it. 
00:20:41.404 [2024-04-18 17:07:56.820282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.820431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.820460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.820676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.820890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.820918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.821101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.821208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.821234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.821346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.821486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.821513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 
00:20:41.404 [2024-04-18 17:07:56.821657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.821799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.821829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.821964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.822093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.822119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.822254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.822373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.822437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.822545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.822757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.822785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 
00:20:41.404 [2024-04-18 17:07:56.822933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.823053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.823081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.823228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.823326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.823352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.823493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.823627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.823663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.823767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.823870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.823896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 
00:20:41.404 [2024-04-18 17:07:56.824003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.824135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.824163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.824313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.824475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.824501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.824648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.824764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.824793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.824937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.825078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.825107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 
00:20:41.404 [2024-04-18 17:07:56.825230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.825365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.825398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.825509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.825639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.825676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.825792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.825899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.825924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.826063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.826218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.826247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 
00:20:41.404 [2024-04-18 17:07:56.826401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.826574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.826600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.826740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.826977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.827026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.827207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.827310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.827336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.827499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.827625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.827661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 
00:20:41.404 [2024-04-18 17:07:56.827808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.827961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.827989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.828147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.828301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.828327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.828458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.828605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.828630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.828781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.828921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.828971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 
00:20:41.404 [2024-04-18 17:07:56.829190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.829337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.829362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.829489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.829605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.829630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.829740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.829893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.829921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.404 qpair failed and we were unable to recover it. 00:20:41.404 [2024-04-18 17:07:56.830095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.404 [2024-04-18 17:07:56.830260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.830289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 
00:20:41.405 [2024-04-18 17:07:56.830476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.830579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.830605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.830742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.830876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.830902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.831057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.831195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.831223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.831355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.831531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.831557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 
00:20:41.405 [2024-04-18 17:07:56.831702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.831826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.831857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.831995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.832156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.832182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.832301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.832438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.832465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.832574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.832749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.832777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 
00:20:41.405 [2024-04-18 17:07:56.832933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.833070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.833095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.833254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.833390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.833416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.833523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.833620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.833645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.833788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.833884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.833928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 
00:20:41.405 [2024-04-18 17:07:56.834050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.834164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.834193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.834324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.834441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.834467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.834604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.834725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.834751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.834909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.835089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.835116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 
00:20:41.405 [2024-04-18 17:07:56.835251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.835404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.835442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.835568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.835676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.835702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.835822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.835969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.835998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.836129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.836266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.836292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 
00:20:41.405 [2024-04-18 17:07:56.836430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.836571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.836599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.836758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.836861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.836886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.837017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.837125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.837151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.837285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.837394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.837436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 
00:20:41.405 [2024-04-18 17:07:56.837566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.837670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.837696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.837799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.837903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.837930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.838035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.838137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.838163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.838274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.838412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.838446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 
00:20:41.405 [2024-04-18 17:07:56.838574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.838679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.838713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.838845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.838976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.839001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.405 [2024-04-18 17:07:56.839135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.839350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.405 [2024-04-18 17:07:56.839376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.405 qpair failed and we were unable to recover it. 00:20:41.406 [2024-04-18 17:07:56.839498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.406 [2024-04-18 17:07:56.839629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.406 [2024-04-18 17:07:56.839665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.406 qpair failed and we were unable to recover it. 
00:20:41.406 [2024-04-18 17:07:56.839814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.839932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.839962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.840122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.840224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.840251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.840356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.840488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.840516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.840637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.840745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.840774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.840922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.841068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.841097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.841258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.841417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.841476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.841594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.841731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.841759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.841918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.842022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.842048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.842204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.842432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.842462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.842588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.842716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.842742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.842839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.843052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.843081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.843223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.843391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.843417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.843532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.843659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.843687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.843833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.843978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.844004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.844115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.844250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.844276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.844393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.844554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.844582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.844728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.844898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.844927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.845072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.845181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.845207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.845362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.845557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.845583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.845696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.845849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.845878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.846031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.846130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.846155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.846287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.846411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.846440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.846549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.846715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.846741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.846888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.847046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.847073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.847212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.847374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.847410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.847557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.847768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.847794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.847978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.848133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.848160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.848295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.848422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.848464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.848598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.848705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.848732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.848856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.848958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.848984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.406 qpair failed and we were unable to recover it.
00:20:41.406 [2024-04-18 17:07:56.849112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.849229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.406 [2024-04-18 17:07:56.849257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.849477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.849620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.849649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.849762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.849939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.849967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.850117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.850255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.850296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.850441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.850561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.850589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.850769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.850922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.850950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.851169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.851313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.851343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.851468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.851576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.851602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.851822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.852048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.852097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.852214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.852356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.852392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.852539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.852728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.852755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.852914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.853059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.853096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.853233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.853370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.853406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.853584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.853738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.853767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.853913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.854028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.854058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.854215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.854347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.854373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.854568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.854719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.854747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.854888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.855036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.855064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.855233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.855376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.855436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.855562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.855719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.855745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.855901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.856027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.856057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.856179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.856341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.856367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.856536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.856713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.856743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.856875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.857039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.857065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.857238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.857404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.857431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.857590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.857766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.857792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.857953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.858072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.858100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.858230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.858327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.858353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.858469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.858571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.858598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.407 qpair failed and we were unable to recover it.
00:20:41.407 [2024-04-18 17:07:56.858731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.858829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.407 [2024-04-18 17:07:56.858855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.859008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.859181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.859209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.859321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.859502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.859529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.859637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.859768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.859794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.859934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.860140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.860166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.860297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.860453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.860479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.860627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.860734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.860759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.860907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.861053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.861082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.861224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.861369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.861408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.861628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.861797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.861823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.861958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.862091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.862116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.862271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.862503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.862532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.862680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.862852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.408 [2024-04-18 17:07:56.862881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.408 qpair failed and we were unable to recover it.
00:20:41.408 [2024-04-18 17:07:56.863053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.863197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.863225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.863374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.863521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.863547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.863697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.863880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.863906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.864079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.864261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.864290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 
00:20:41.408 [2024-04-18 17:07:56.864424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.864562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.864590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.864742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.864854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.864881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.864996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.865111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.865151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.865297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.865462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.865490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 
00:20:41.408 [2024-04-18 17:07:56.865635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.865757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.865786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.865915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.866043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.866069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.866203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.866395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.866421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.866557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.866724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.866750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 
00:20:41.408 [2024-04-18 17:07:56.866879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.867046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.867075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.867255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.867367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.867401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.867594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.867867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.867917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 00:20:41.408 [2024-04-18 17:07:56.868164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.868308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.868337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.408 qpair failed and we were unable to recover it. 
00:20:41.408 [2024-04-18 17:07:56.868490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.868711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.408 [2024-04-18 17:07:56.868740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.868899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.869035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.869061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.869199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.869393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.869422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.869596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.869748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.869798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 
00:20:41.409 [2024-04-18 17:07:56.869974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.870147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.870176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.870332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.870474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.870501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.870633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.870790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.870819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.871079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.871281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.871309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 
00:20:41.409 [2024-04-18 17:07:56.871461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.871601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.871629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.871753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.871886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.871912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.872071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.872225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.872253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.872404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.872626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.872655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 
00:20:41.409 [2024-04-18 17:07:56.872805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.872927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.872957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.873083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.873244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.873270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.873448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.873616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.873645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.873787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.873932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.873965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 
00:20:41.409 [2024-04-18 17:07:56.874108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.874248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.874277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.874436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.874576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.874602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.874700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.874911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.874940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.875089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.875204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.875245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 
00:20:41.409 [2024-04-18 17:07:56.875379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.875596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.875623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.875763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.875896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.875922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.876022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.876125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.876151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.876259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.876362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.876395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 
00:20:41.409 [2024-04-18 17:07:56.876494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.876601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.876626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.876731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.876888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.876915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.877075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.877232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.877261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.877434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.877580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.877609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 
00:20:41.409 [2024-04-18 17:07:56.877765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.877906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.877935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.878096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.878227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.878254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.878358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.878463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.878489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 00:20:41.409 [2024-04-18 17:07:56.878620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.878772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.878801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.409 qpair failed and we were unable to recover it. 
00:20:41.409 [2024-04-18 17:07:56.878974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.409 [2024-04-18 17:07:56.879110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.879139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.879318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.879451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.879495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.879668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.879846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.879871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.880009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.880164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.880193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 
00:20:41.410 [2024-04-18 17:07:56.880388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.880499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.880525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.880652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.880785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.880811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.880950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.881112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.881138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.881264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.881399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.881426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 
00:20:41.410 [2024-04-18 17:07:56.881598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.881769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.881797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.881945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.882072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.882098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.882280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.882442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.882484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.882616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.882810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.882838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 
00:20:41.410 [2024-04-18 17:07:56.883029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.883178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.883204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.883362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.883483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.883509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.883672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.883900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.883929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.884059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.884267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.884296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 
00:20:41.410 [2024-04-18 17:07:56.884449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.884665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.884691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.884828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.884956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.884981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.885162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.885330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.885358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.885542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.885755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.885808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 
00:20:41.410 [2024-04-18 17:07:56.885937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.886046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.886072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.886254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.886450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.886477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.886585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.886715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.886741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.886877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.886994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.887023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 
00:20:41.410 [2024-04-18 17:07:56.887198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.887424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.887454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.887642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.887871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.887917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.888059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.888216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.888242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.888342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.888499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.888529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 
00:20:41.410 [2024-04-18 17:07:56.888657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.888816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.888842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.889008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.889155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.889181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.889314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.889475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.889506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 00:20:41.410 [2024-04-18 17:07:56.889682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.889852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.410 [2024-04-18 17:07:56.889881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.410 qpair failed and we were unable to recover it. 
00:20:41.411 [2024-04-18 17:07:56.890026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.890172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.890203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.890375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.890531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.890560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.890731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.890900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.890965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.891151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.891330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.891356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 
00:20:41.411 [2024-04-18 17:07:56.891513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.891612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.891638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.891795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.892006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.892032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.892167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.892302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.892328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.892487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.892631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.892659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 
00:20:41.411 [2024-04-18 17:07:56.892830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.892966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.892993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.893122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.893256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.893282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.893414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.893523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.893549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.893703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.893974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.894031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 
00:20:41.411 [2024-04-18 17:07:56.894209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.894379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.894414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.894555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.894685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.894711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.894847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.895032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.895061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.895238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.895402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.895431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 
00:20:41.411 [2024-04-18 17:07:56.895558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.895740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.895767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.895982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.896118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.896144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.896248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.896389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.896415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.896516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.896672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.896698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 
00:20:41.411 [2024-04-18 17:07:56.896883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.897011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.897037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.897162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.897322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.897366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.897535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.897684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.897713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.897892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.898113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.898142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 
00:20:41.411 [2024-04-18 17:07:56.898291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.898451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.898478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.898609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.898767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.898809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.898982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.899167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.899193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.899327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.899487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.899516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 
00:20:41.411 [2024-04-18 17:07:56.899663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.899846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.899873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.900002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.900137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.900164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.900325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.900499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.900528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.411 qpair failed and we were unable to recover it. 00:20:41.411 [2024-04-18 17:07:56.900670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.900926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.411 [2024-04-18 17:07:56.900977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 
00:20:41.412 [2024-04-18 17:07:56.901139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.901305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.901331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.901545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.901646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.901672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.901828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.901967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.901996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.902167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.902300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.902329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 
00:20:41.412 [2024-04-18 17:07:56.902478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.902603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.902630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.902788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.902966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.902994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.903168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.903316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.903346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.903513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.903641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.903670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 
00:20:41.412 [2024-04-18 17:07:56.903828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.903938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.903965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.904110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.904216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.904242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.904368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.904499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.904529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.904687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.904866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.904893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 
00:20:41.412 [2024-04-18 17:07:56.905016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.905165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.905194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.905364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.905537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.905564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.905725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.905916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.905968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.906190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.906361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.906397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 
00:20:41.412 [2024-04-18 17:07:56.906537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.906708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.906733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.906867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.907027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.907069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.907213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.907401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.907430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.907574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.907792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.907841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 
00:20:41.412 [2024-04-18 17:07:56.907991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.908131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.908157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.908297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.908455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.908485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.908650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.908808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.908836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.909005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.909149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.909177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 
00:20:41.412 [2024-04-18 17:07:56.909288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.909458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.909488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.412 [2024-04-18 17:07:56.909673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.909775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.412 [2024-04-18 17:07:56.909816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.412 qpair failed and we were unable to recover it. 00:20:41.413 [2024-04-18 17:07:56.909939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.413 [2024-04-18 17:07:56.910087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.413 [2024-04-18 17:07:56.910116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.413 qpair failed and we were unable to recover it. 00:20:41.413 [2024-04-18 17:07:56.910236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.413 [2024-04-18 17:07:56.910373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.413 [2024-04-18 17:07:56.910407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.413 qpair failed and we were unable to recover it. 
00:20:41.415 [2024-04-18 17:07:56.938699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.938846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.938875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.939056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.939189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.939215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.939354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.939508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.939537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.939785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.940026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.940078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 
00:20:41.415 [2024-04-18 17:07:56.940262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.940471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.940526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.940676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.940799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.940825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.940962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.941153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.941182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.941341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.941509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.941553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 
00:20:41.415 [2024-04-18 17:07:56.941737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.941902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.941943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.942122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.942270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.942298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.942479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.942643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.942685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 00:20:41.415 [2024-04-18 17:07:56.942908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.943170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.415 [2024-04-18 17:07:56.943198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.415 qpair failed and we were unable to recover it. 
00:20:41.415 [2024-04-18 17:07:56.943376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.943553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.943579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.943683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.943915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.943944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.944088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.944211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.944252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.944364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.944472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.944497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 
00:20:41.416 [2024-04-18 17:07:56.944633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.944758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.944784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.944933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.945071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.945113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.945250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.945409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.945440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.945604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.945762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.945790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 
00:20:41.416 [2024-04-18 17:07:56.945965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.946100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.946127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.946266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.946373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.946411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.946568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.946738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.946766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.946881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.946995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.947024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 
00:20:41.416 [2024-04-18 17:07:56.947186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.947321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.947347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.947465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.947564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.947588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.947769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.947905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.947934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.948077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.948298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.948326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 
00:20:41.416 [2024-04-18 17:07:56.948449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.948607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.948636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.948762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.948894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.948920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.949100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.949248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.949276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.949421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.949544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.949573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 
00:20:41.416 [2024-04-18 17:07:56.949708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.949869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.949895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.950000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.950107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.950132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.950260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.950374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.950424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.950554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.950685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.950711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 
00:20:41.416 [2024-04-18 17:07:56.950818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.950975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.951001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.951133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.951292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.951319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.951428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.951553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.951579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.951695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.951811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.951839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 
00:20:41.416 [2024-04-18 17:07:56.951957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.952070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.416 [2024-04-18 17:07:56.952099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.416 qpair failed and we were unable to recover it. 00:20:41.416 [2024-04-18 17:07:56.952248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.952379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.952414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.952591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.952747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.952774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.952941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.953125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.953154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 
00:20:41.417 [2024-04-18 17:07:56.953300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.953445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.953474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.953654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.953766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.953802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.953930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.954078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.954105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.954231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.954344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.954373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 
00:20:41.417 [2024-04-18 17:07:56.954518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.954635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.954675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.954795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.954928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.954954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.955089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.955217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.955246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.955361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.955523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.955552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 
00:20:41.417 [2024-04-18 17:07:56.955712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.955830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.955861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.956021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.956131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.956156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.956260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.956379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.956438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 00:20:41.417 [2024-04-18 17:07:56.956574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.956699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.417 [2024-04-18 17:07:56.956727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.417 qpair failed and we were unable to recover it. 
00:20:41.417 [2024-04-18 17:07:56.956872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.956987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.957015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.957135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.957308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.957350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.957502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.957637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.957673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.957779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.957919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.957945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.958071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.958252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.958280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.958441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.958566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.958608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.958724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.958901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.958930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.959067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.959195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.959220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.959354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.959513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.959542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.959699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.959836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.959861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.960033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.960177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.960206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.960360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.960518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.960545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.960677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.960832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.960858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.960993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.961124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.961150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.961249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.961406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.961434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.961565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.961748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.961777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.961923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.962035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.962067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.962195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.962304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.962330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.417 [2024-04-18 17:07:56.962495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.962642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.417 [2024-04-18 17:07:56.962668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.417 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.962805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.962962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.962990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.963172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.963278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.963305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.963444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.963586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.963629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.963787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.963916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.963941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.964048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.964169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.964197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.964372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.964537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.964563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.964663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.964796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.964823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.964919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.965089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.965118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.965287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.965392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.965418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.965578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.965755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.965781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.965912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.966044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.966070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.966221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.966363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.966398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.966589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.966739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.966768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.966941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.967061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.967089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.967239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.967404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.967447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.967590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.967732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.967760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.967880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.967987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.968015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.968189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.968302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.968330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.968464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.968581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.968606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.968764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.968863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.968889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.968991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.969093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.969118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.969268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.969413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.969442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.969572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.969729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.969755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.969941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.970110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.970139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.970258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.970434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.970460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.970589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.970691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.970717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.970815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.970956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.970981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.971159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.971306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.971334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.971494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.971608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.971636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.971750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.971889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.971917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.972043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.972182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.972207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.972397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.972554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.972580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.972739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.972853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.972881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.418 [2024-04-18 17:07:56.972995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.973167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.418 [2024-04-18 17:07:56.973195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.418 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.973352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.973491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.973533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.973647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.973791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.973819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.973956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.974097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.974125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.974273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.974431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.974460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.974591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.974723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.974749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.974915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.975069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.975098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.975256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.975362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.975414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.975530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.975674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.975703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.975853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.975987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.976012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.976202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.976335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.976361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.976541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.976664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.976693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.976848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.976995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.977023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.977146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.977274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.977299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.977443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.977579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.977605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.977706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.977820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.977869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.978005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.978134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.978160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.978319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.978431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.978457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.978621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.978750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.978775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.978878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.979011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.979037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.979171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.979335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.979364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.979551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.979697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.979723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.979830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.979954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.979982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.980093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.980213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.980241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.980410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.980558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.980584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.980683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.980786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.980815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.980956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.981093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.981121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.981247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.981362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.981398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.981516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.981679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.981704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.981845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.981989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.982015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.982168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.982331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.982356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.982472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.982582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.982625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.982744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.982887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.982915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.983061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.983192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.983218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.983367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.983515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.983543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.983688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.983872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.419 [2024-04-18 17:07:56.983898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.419 qpair failed and we were unable to recover it.
00:20:41.419 [2024-04-18 17:07:56.984038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.984154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.984180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.984319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.984451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.984478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.984611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.984727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.984755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.984899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.985065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.985094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 
00:20:41.420 [2024-04-18 17:07:56.985267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.985409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.985439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.985590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.985722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.985764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.985910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.986068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.986093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.986199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.986355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.986391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 
00:20:41.420 [2024-04-18 17:07:56.986531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.986675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.986703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.986829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.986988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.987013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.987167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.987280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.987309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.987472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.987595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.987637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 
00:20:41.420 [2024-04-18 17:07:56.987773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.987900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.987925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.988036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.988134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.988160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.988307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.988427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.988456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.988577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.988691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.988719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 
00:20:41.420 [2024-04-18 17:07:56.988866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.989009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.989037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.989220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.989350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.989376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.989539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.989680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.989705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.989822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.989970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.989998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 
00:20:41.420 [2024-04-18 17:07:56.990138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.990284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.990312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.990447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.990583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.990608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.990731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.990863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.990888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.991019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.991196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.991224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 
00:20:41.420 [2024-04-18 17:07:56.991370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.991483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.991511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.991641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.991743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.991769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.991897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.992006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.992034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.992212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.992324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.992352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 
00:20:41.420 [2024-04-18 17:07:56.992509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.992621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.992647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.992780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.992884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.992909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.993043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.993153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.993178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.420 qpair failed and we were unable to recover it. 00:20:41.420 [2024-04-18 17:07:56.993283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.993395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.420 [2024-04-18 17:07:56.993421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 
00:20:41.421 [2024-04-18 17:07:56.993568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.993681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.993710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.993864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.993960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.993986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.994112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.994228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.994258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.994419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.994530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.994558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 
00:20:41.421 [2024-04-18 17:07:56.994674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.994808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.994851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.994980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.995138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.995164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.995280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.995415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.995441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.995547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.995651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.995676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 
00:20:41.421 [2024-04-18 17:07:56.995832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.995990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.996020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.996186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.996343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.996368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.996500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.996627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.996655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.996800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.996933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.996958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 
00:20:41.421 [2024-04-18 17:07:56.997089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.997229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.997257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.997394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.997529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.997555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.997705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.997847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.997875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.998035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.998183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.998211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 
00:20:41.421 [2024-04-18 17:07:56.998352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.998468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.998497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.998619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.998783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.998809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.998968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.999107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.999136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.999261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.999391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.999423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 
00:20:41.421 [2024-04-18 17:07:56.999556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.999684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:56.999710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:56.999881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.000043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.000072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:57.000193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.000320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.000345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:57.000541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.000706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.000735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 
00:20:41.421 [2024-04-18 17:07:57.000891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.001032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.001061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:57.001232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.001361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.001394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:57.001528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.001688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.001717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 00:20:41.421 [2024-04-18 17:07:57.001856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.001991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.421 [2024-04-18 17:07:57.002020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.421 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.030335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.030468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.030494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.030632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.030734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.030760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.030943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.031099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.031126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.031286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.031427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.031453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.031606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.031790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.031816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.031950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.032116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.032142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.032295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.032432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.032459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.032596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.032819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.032879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.033045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.033198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.033223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.033393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.033525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.033568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.033704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.033859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.033884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.034024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.034204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.034232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.034406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.034528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.034556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.034722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.034863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.034889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.035022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.035128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.035154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.035260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.035408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.035444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.035602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.035802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.035827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.035957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.036115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.036140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.036311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.036444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.036470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.036601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.036743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.036768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.036921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.037063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.037091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.037270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.037377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.037409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.037576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.037719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.037748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.037859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.038001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.038029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.038147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.038305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.038334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.038508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.038639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.038665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.038795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.038949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.038975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.039110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.039255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.039281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.039410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.039554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.039579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.039785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.039911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.039937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.040098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.040249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.040277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.040549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.040698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.040724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 
00:20:41.424 [2024-04-18 17:07:57.040840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.040991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.041019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.041154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.041290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.041315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.424 qpair failed and we were unable to recover it. 00:20:41.424 [2024-04-18 17:07:57.041426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.424 [2024-04-18 17:07:57.041531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.041557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.041719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.041908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.041973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 
00:20:41.425 [2024-04-18 17:07:57.042151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.042284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.042310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.042481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.042609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.042647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.042824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.042971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.042999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.043160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.043307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.043336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 
00:20:41.425 [2024-04-18 17:07:57.043492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.043675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.043701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.043827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.043932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.043958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.044068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.044179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.044204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.044341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.044503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.044533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 
00:20:41.425 [2024-04-18 17:07:57.044692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.044853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.044895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.045027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.045158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.045184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.045315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.045416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.045443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.045570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.045753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.045779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 
00:20:41.425 [2024-04-18 17:07:57.045962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.046060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.046086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.046218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.046349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.046378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.046544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.046728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.046756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.046876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.047039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.047065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 
00:20:41.425 [2024-04-18 17:07:57.047222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.047351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.047377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.047522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.047646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.047672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.047803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.047913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.047941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 00:20:41.425 [2024-04-18 17:07:57.048064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.048237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.425 [2024-04-18 17:07:57.048266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.425 qpair failed and we were unable to recover it. 
00:20:41.425 [2024-04-18 17:07:57.048440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.425 [2024-04-18 17:07:57.048581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.425 [2024-04-18 17:07:57.048606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.425 qpair failed and we were unable to recover it.
[... the same three-line sequence — two posix_sock_create connect() failures (errno = 111) followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x125af30 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 17:07:57.048440 through 17:07:57.076450, with only the timestamps differing ...]
00:20:41.427 [2024-04-18 17:07:57.076262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.427 [2024-04-18 17:07:57.076409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.427 [2024-04-18 17:07:57.076450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.427 qpair failed and we were unable to recover it.
00:20:41.427 [2024-04-18 17:07:57.076616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.076761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.076788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.427 qpair failed and we were unable to recover it. 00:20:41.427 [2024-04-18 17:07:57.076957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.077090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.077133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.427 qpair failed and we were unable to recover it. 00:20:41.427 [2024-04-18 17:07:57.077301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.077436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.077463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.427 qpair failed and we were unable to recover it. 00:20:41.427 [2024-04-18 17:07:57.077602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.077753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.077782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.427 qpair failed and we were unable to recover it. 
00:20:41.427 [2024-04-18 17:07:57.077932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.078029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.078056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.427 qpair failed and we were unable to recover it. 00:20:41.427 [2024-04-18 17:07:57.078159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.078316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.427 [2024-04-18 17:07:57.078342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.427 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.078538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.078707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.078749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.078974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.079143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.079172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 
00:20:41.428 [2024-04-18 17:07:57.079324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.079438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.079467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.079646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.079782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.079825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.079967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.080138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.080168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.080317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.080465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.080493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 
00:20:41.428 [2024-04-18 17:07:57.080641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.080761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.080790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.080940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.081050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.081076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.081216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.081394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.081433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.081544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.081659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.081701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 
00:20:41.428 [2024-04-18 17:07:57.081844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.082010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.082035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.082164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.082265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.082297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.082414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.082536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.082577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.082704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.082834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.082860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 
00:20:41.428 [2024-04-18 17:07:57.082995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.083152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.083181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.083343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.083494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.083520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.083631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.083737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.083762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.083883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.084046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.084074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 
00:20:41.428 [2024-04-18 17:07:57.084189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.084348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.084377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.084538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.084664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.084690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.084826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.084963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.084989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.085156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.085323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.085349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 
00:20:41.428 [2024-04-18 17:07:57.085517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.085635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.428 [2024-04-18 17:07:57.085663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.428 qpair failed and we were unable to recover it. 00:20:41.428 [2024-04-18 17:07:57.085839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.705 [2024-04-18 17:07:57.086021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.086051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.086170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.086273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.086314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.086458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.086626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.086657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 
00:20:41.706 [2024-04-18 17:07:57.086855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.086995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.087032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.087167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.087267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.087293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.087442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.087574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.087602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.087748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.087897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.087925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 
00:20:41.706 [2024-04-18 17:07:57.088071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.088182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.088210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.088363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.088470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.088496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.088650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.088788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.088816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.088997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.089139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.089168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 
00:20:41.706 [2024-04-18 17:07:57.089289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.089458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.089484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.089587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.089692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.089717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.089818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.089917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.089942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.090046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.090208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.090237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 
00:20:41.706 [2024-04-18 17:07:57.090342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.090491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.090517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.090674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.090783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.090810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.090940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.091074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.091100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.091234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.091425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.091452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 
00:20:41.706 [2024-04-18 17:07:57.091592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.091785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.091814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.091961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.092094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.092120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.092296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.092478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.092505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.092604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.092708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.092734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 
00:20:41.706 [2024-04-18 17:07:57.092841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.092996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.093022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.706 qpair failed and we were unable to recover it. 00:20:41.706 [2024-04-18 17:07:57.093183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.093310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.706 [2024-04-18 17:07:57.093337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.093501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.093605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.093632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.093747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.093850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.093877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 
00:20:41.707 [2024-04-18 17:07:57.094048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.094179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.094214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.094316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.094450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.094476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.094605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.094772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.094801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.094939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.095078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.095106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 
00:20:41.707 [2024-04-18 17:07:57.095253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.095412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.095464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.095599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.095729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.095755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.095914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.096046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.096072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.096227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.096402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.096435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 
00:20:41.707 [2024-04-18 17:07:57.096573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.096691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.096719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.096904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.097042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.097068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.097170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.097331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.097357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.097531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.097699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.097777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 
00:20:41.707 [2024-04-18 17:07:57.097918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.098061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.098095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.098266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.098391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.098434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.098594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.098691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.098734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.098883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.099020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.099046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 
00:20:41.707 [2024-04-18 17:07:57.099189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.099338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.099367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.099562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.099695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.099721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.099897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.100034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.100063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.100249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.100408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.100445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 
00:20:41.707 [2024-04-18 17:07:57.100584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.100767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.100796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.100948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.101080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.101107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.101246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.101397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.101435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.101585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.101707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.101748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 
00:20:41.707 [2024-04-18 17:07:57.101910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.102062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.102091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.707 qpair failed and we were unable to recover it. 00:20:41.707 [2024-04-18 17:07:57.102274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.707 [2024-04-18 17:07:57.102409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.102458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.102608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.102771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.102797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.102906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.103052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.103081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 
00:20:41.708 [2024-04-18 17:07:57.103218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.103369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.103402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.103540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.103678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.103704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.103838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.103999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.104028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.104173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.104344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.104373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 
00:20:41.708 [2024-04-18 17:07:57.104582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.104696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.104724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.104862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.105022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.105047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.105209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.105394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.105423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.105566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.105766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.105817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 
00:20:41.708 [2024-04-18 17:07:57.105981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.106116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.106142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.106274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.106415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.106442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.106601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.106753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.106782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.106948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.107120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.107150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 
00:20:41.708 [2024-04-18 17:07:57.107290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.107449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.107475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.107580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.107729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.107755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.107936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.108053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.108082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.108261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.108407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.108448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 
00:20:41.708 [2024-04-18 17:07:57.108595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.108742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.108771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.108892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.109028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.109054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.109234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.109476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.109503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.109618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.109776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.109804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 
00:20:41.708 [2024-04-18 17:07:57.109974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.110125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.110154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.708 qpair failed and we were unable to recover it. 00:20:41.708 [2024-04-18 17:07:57.110277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.110379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.708 [2024-04-18 17:07:57.110414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.110593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.110742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.110771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.110926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.111046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.111075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 
00:20:41.709 [2024-04-18 17:07:57.111224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.111372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.111407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.111572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.111707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.111733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.111871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.112016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.112045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.112218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.112340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.112368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 
00:20:41.709 [2024-04-18 17:07:57.112548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.112697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.112726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.112897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.113073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.113102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.113253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.113404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.113443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.113614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.113839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.113897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 
00:20:41.709 [2024-04-18 17:07:57.114012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.114165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.114191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.114291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.114455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.114481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.114666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.114810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.114839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.114984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.115103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.115132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 
00:20:41.709 [2024-04-18 17:07:57.115291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.115449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.115475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.115579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.115718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.115744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.115872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.116016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.116045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.116191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.116361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.116396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 
00:20:41.709 [2024-04-18 17:07:57.116549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.116720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.116748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.116902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.117033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.117058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.117163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.117264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.117289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 00:20:41.709 [2024-04-18 17:07:57.117444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.117592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.709 [2024-04-18 17:07:57.117620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:41.709 qpair failed and we were unable to recover it. 
00:20:41.709 [2024-04-18 17:07:57.117758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.117930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.117958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.709 qpair failed and we were unable to recover it.
00:20:41.709 [2024-04-18 17:07:57.118090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.118229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.118259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.709 qpair failed and we were unable to recover it.
00:20:41.709 [2024-04-18 17:07:57.118392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.118557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.118583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.709 qpair failed and we were unable to recover it.
00:20:41.709 [2024-04-18 17:07:57.118798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.119004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.119053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.709 qpair failed and we were unable to recover it.
00:20:41.709 [2024-04-18 17:07:57.119174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.119337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.119363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.709 qpair failed and we were unable to recover it.
00:20:41.709 [2024-04-18 17:07:57.119517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.119622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.709 [2024-04-18 17:07:57.119661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.709 qpair failed and we were unable to recover it.
00:20:41.709 [2024-04-18 17:07:57.119789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.119923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.119949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.120064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.120219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.120247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.120431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.120572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.120599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.120750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.120915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.120957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.121111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.121253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.121279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.121415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.121574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.121602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.121715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.121892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.121922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.122106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.122212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.122239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.122399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.122579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.122607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.122750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.122886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.122915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.123067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.123255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.123281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.123426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.123532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.123558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.123655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.123838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.123866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.124039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.124204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.124231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.124361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.124506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.124531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.124664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.124802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.124828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.124981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.125125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.125154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.125328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.125455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.125482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.125584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.125701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.125726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.125852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.125979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.126005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.126138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.126242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.126269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.126441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.126544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.126569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 [2024-04-18 17:07:57.126702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.126855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.710 [2024-04-18 17:07:57.126880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:41.710 qpair failed and we were unable to recover it.
00:20:41.710 Read completed with error (sct=0, sc=8)
00:20:41.710 starting I/O failed
00:20:41.710 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 [2024-04-18 17:07:57.127215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Write completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 Read completed with error (sct=0, sc=8)
00:20:41.711 starting I/O failed
00:20:41.711 [2024-04-18 17:07:57.127555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:41.711 [2024-04-18 17:07:57.127697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268860 is same with the state(5) to be set
00:20:41.711 [2024-04-18 17:07:57.127904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.128046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.128077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.711 qpair failed and we were unable to recover it.
00:20:41.711 [2024-04-18 17:07:57.128208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.128401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.128443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.711 qpair failed and we were unable to recover it.
00:20:41.711 [2024-04-18 17:07:57.128579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.128720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.128752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.711 qpair failed and we were unable to recover it.
00:20:41.711 [2024-04-18 17:07:57.128930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.129076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.129106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.711 qpair failed and we were unable to recover it.
00:20:41.711 [2024-04-18 17:07:57.129254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.129406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.129450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.711 qpair failed and we were unable to recover it.
00:20:41.711 [2024-04-18 17:07:57.129558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.129723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.711 [2024-04-18 17:07:57.129749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.711 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.129916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.130084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.130114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.130286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.130424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.130452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.130593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.130731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.130758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.131038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.131235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.131265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.131426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.131567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.131593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.131762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.131919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.131948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.132119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.132297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.132332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.132471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.132606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.132632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.132796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.132927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.132954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.133106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.133243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.133273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.133435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.133545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.133571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.133683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.133821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.133847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.134010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.134172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.134202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.134357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.134478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.134504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.134616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.134762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.134789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.134996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.135168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.135198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.135358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.135557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.135589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.135759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.135888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.135914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.136068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.136283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.136313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.136471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.136581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.136608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.136733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.136903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.136933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.137052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.137203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.137234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.137354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.137489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.137517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.137730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.137898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.137927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.138151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.138412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.138461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.138593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.138719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.138746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.138934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.139117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.139148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.139331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.139491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.139519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.712 qpair failed and we were unable to recover it.
00:20:41.712 [2024-04-18 17:07:57.139659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.139796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.712 [2024-04-18 17:07:57.139824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.140020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.140173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.140203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.140322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.140480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.140507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.140610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.140759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.140788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.140941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.141050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.141077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.141285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.141474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.141502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.141656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.141804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.141833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.141980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.142107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.142134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.142251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.142368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.142403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.142530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.142671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.142701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.142879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.143039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.143068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.143222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.143346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.143391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.143577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.143747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.713 [2024-04-18 17:07:57.143777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.713 qpair failed and we were unable to recover it.
00:20:41.713 [2024-04-18 17:07:57.143930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.144106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.144147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.144326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.144476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.144506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.144624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.144802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.144831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.145032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.145171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.145197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 
00:20:41.713 [2024-04-18 17:07:57.145368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.145531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.145561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.145722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.145903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.145947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.146130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.146264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.146292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.146531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.146682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.146712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 
00:20:41.713 [2024-04-18 17:07:57.146886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.147068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.147095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.147226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.147361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.147393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.147555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.147678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.147709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.147858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.148003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.148033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 
00:20:41.713 [2024-04-18 17:07:57.148253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.148396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.148423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.148531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.148664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.148691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.148836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.149006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.149036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 00:20:41.713 [2024-04-18 17:07:57.149246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.149399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.713 [2024-04-18 17:07:57.149426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.713 qpair failed and we were unable to recover it. 
00:20:41.713 [2024-04-18 17:07:57.149558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.149691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.149718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.149894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.150030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.150059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.150237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.150428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.150459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.150632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.150773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.150802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 
00:20:41.714 [2024-04-18 17:07:57.150951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.151085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.151113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.151274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.151409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.151437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.151572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.151808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.151837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.152014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.152154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.152184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 
00:20:41.714 [2024-04-18 17:07:57.152308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.152449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.152476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.152617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.152779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.152808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.153037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.153174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.153203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.153361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.153507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.153533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 
00:20:41.714 [2024-04-18 17:07:57.153686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.153800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.153827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.153932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.154061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.154091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.154274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.154376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.154411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.154572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.154717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.154747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 
00:20:41.714 [2024-04-18 17:07:57.154920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.155036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.155064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.155223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.155330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.155357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.155507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.155685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.155713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.155852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.155966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.155993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 
00:20:41.714 [2024-04-18 17:07:57.156161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.156313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.156342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.156510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.156617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.156646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.156816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.156974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.157000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.157157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.157287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.157312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 
00:20:41.714 [2024-04-18 17:07:57.157457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.157615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.157641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.157769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.157933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.157959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.158115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.158257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.158300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.714 [2024-04-18 17:07:57.158448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.158604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.158633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 
00:20:41.714 [2024-04-18 17:07:57.158771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.158925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.714 [2024-04-18 17:07:57.158953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.714 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.159129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.159290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.159316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.159477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.159615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.159644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.159803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.159938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.159964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 
00:20:41.715 [2024-04-18 17:07:57.160098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.160204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.160230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.160343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.160506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.160533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.160692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.160810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.160838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.161022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.161125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.161153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 
00:20:41.715 [2024-04-18 17:07:57.161362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.161529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.161572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.161717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.161868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.161897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.162028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.162159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.162186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 00:20:41.715 [2024-04-18 17:07:57.162339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.162490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.715 [2024-04-18 17:07:57.162533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.715 qpair failed and we were unable to recover it. 
00:20:41.715 [2024-04-18 17:07:57.162672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.162817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.162847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.163029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.163202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.163231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.163343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.163463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.163492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.163619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.163771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.163800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.163962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.164126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.164152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.164349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.164490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.164517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.164655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.164804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.164831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.164966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.165125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.165169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.165312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.165440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.165467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.165636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.165834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.165860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.165994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.166133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.166159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.166313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.166427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.166457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.715 [2024-04-18 17:07:57.166624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.166853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.715 [2024-04-18 17:07:57.166902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.715 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.167059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.167179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.167205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.167304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.167439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.167465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.167620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.167766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.167795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.167951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.168089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.168115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.168278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.168439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.168467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.168583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.168728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.168754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.168911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.169012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.169038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.169170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.169298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.169323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.169489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.169616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.169641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.169800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.169959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.169985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.170090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.170248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.170274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.170404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.170519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.170547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.170734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.170862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.170902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.171085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.171228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.171254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.171435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.171598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.171624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.171795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.171934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.171959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.172097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.172250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.172278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.172399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.172585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.172611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.172774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.172888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.172914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.173047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.173197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.173224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.173347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.173546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.173574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.173687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.173816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.173846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.174016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.174199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.174228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.174372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.174569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.174594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.174706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.174863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.174888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.175039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.175204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.175233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.175378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.175512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.175539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.175675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.175809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.716 [2024-04-18 17:07:57.175838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.716 qpair failed and we were unable to recover it.
00:20:41.716 [2024-04-18 17:07:57.175991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.176129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.176156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.176274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.176418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.176447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.176633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.176772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.176815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.176990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.177136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.177164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.177314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.177479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.177505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.177647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.177750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.177775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.177909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.178046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.178074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.178199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.178358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.178392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.178533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.178638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.178664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.178803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.178950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.178982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.179137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.179293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.179322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.179453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.179618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.179644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.179802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.179968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.179996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.180103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.180248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.180276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.180425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.180535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.180561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.180690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.180827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.180856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.180977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.181148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.181177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.181305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.181446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.181481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.181658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.181805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.181833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.181970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.182132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.182162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.182271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.182434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.182461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.182611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.182784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.182812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.182945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.183077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.183102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.183240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.183399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.183443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.183611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.183755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.183782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.183988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.184121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.184146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.184308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.184468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.184495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.184654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.184821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.184847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.717 [2024-04-18 17:07:57.185006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.185176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.717 [2024-04-18 17:07:57.185205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.717 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.185392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.185542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.185572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.185711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.185852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.185877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.186035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.186153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.186181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.186309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.186444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.186470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.186580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.186706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.186736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.186917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.187089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.187118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.187297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.187513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.187562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.187679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.187825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.187856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.188038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.188160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.188190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.188325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.188436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.188463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.188641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.188880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.188931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.189094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.189227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.189253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.189412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.189548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.189591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.189772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.189983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.190036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.190216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.190393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.190423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.190576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.190710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.190736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.190891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.191048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.191073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.191234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.191394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.718 [2024-04-18 17:07:57.191423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.718 qpair failed and we were unable to recover it.
00:20:41.718 [2024-04-18 17:07:57.191581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.718 [2024-04-18 17:07:57.191737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.718 [2024-04-18 17:07:57.191778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.718 qpair failed and we were unable to recover it. 00:20:41.718 [2024-04-18 17:07:57.191924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.718 [2024-04-18 17:07:57.192115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.718 [2024-04-18 17:07:57.192142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.718 qpair failed and we were unable to recover it. 00:20:41.718 [2024-04-18 17:07:57.192279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.718 [2024-04-18 17:07:57.192459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.718 [2024-04-18 17:07:57.192488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.718 qpair failed and we were unable to recover it. 00:20:41.718 [2024-04-18 17:07:57.192655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.718 [2024-04-18 17:07:57.192821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.718 [2024-04-18 17:07:57.192847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.718 qpair failed and we were unable to recover it. 
00:20:41.725 [2024-04-18 17:07:57.192979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.193123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.193152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.725 qpair failed and we were unable to recover it. 00:20:41.725 [2024-04-18 17:07:57.193296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.193455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.193482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.725 qpair failed and we were unable to recover it. 00:20:41.725 [2024-04-18 17:07:57.193639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.193766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.193792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.725 qpair failed and we were unable to recover it. 00:20:41.725 [2024-04-18 17:07:57.193952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.194139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.194165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.725 qpair failed and we were unable to recover it. 
00:20:41.725 [2024-04-18 17:07:57.194298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.194406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.194433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.725 qpair failed and we were unable to recover it. 00:20:41.725 [2024-04-18 17:07:57.194532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.194662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.194688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.725 qpair failed and we were unable to recover it. 00:20:41.725 [2024-04-18 17:07:57.194844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.194987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.195015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.725 qpair failed and we were unable to recover it. 00:20:41.725 [2024-04-18 17:07:57.195147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.195288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.725 [2024-04-18 17:07:57.195317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.725 qpair failed and we were unable to recover it. 
00:20:41.725 [2024-04-18 17:07:57.195451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.195586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.195611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.195798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.195920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.195949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.196078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.196215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.196244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.196391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.196528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.196554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 
00:20:41.726 [2024-04-18 17:07:57.196732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.196854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.196882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.197024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.197144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.197172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.197329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.197453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.197481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.197614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.197766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.197796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 
00:20:41.726 [2024-04-18 17:07:57.197945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.198060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.198090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.198238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.198354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.198387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.198546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.198675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.198704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.198852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.198990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.199020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 
00:20:41.726 [2024-04-18 17:07:57.199179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.199290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.199316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.199452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.199555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.199581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.199699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.199854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.199883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.200040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.200174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.200202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 
00:20:41.726 [2024-04-18 17:07:57.200333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.200465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.200494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.200663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.200814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.200840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.200969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.201102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.201128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.201289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.201480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.201508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 
00:20:41.726 [2024-04-18 17:07:57.201620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.201756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.201782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.201884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.202037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.202063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.202202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.202369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.202406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.202523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.202629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.202658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 
00:20:41.726 [2024-04-18 17:07:57.202781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.202928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.202954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.203069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.203208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.203237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.726 qpair failed and we were unable to recover it. 00:20:41.726 [2024-04-18 17:07:57.203361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.726 [2024-04-18 17:07:57.203532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.203559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.203660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.203790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.203816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 
00:20:41.727 [2024-04-18 17:07:57.203960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.204141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.204168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.204278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.204392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.204420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.204533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.204627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.204651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.204815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.204972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.205001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 
00:20:41.727 [2024-04-18 17:07:57.205135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.205269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.205297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.205431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.205593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.205619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.205724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.205844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.205872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.206010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.206156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.206184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 
00:20:41.727 [2024-04-18 17:07:57.206340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.206476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.206503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.206646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.206753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.206779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.206924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.207035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.207065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.207223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.207333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.207359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 
00:20:41.727 [2024-04-18 17:07:57.207539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.207711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.207749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.207904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.208086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.208121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.208262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.208367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.208405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.208551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.208683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.208720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 
00:20:41.727 [2024-04-18 17:07:57.208883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.209050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.209081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.209193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.209336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.209364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.209498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.209621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.209651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.209796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.209968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.209997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 
00:20:41.727 [2024-04-18 17:07:57.210126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.210234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.210260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.210401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.210564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.210595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.210712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.210823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.210853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 00:20:41.727 [2024-04-18 17:07:57.211013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.211121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.727 [2024-04-18 17:07:57.211149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.727 qpair failed and we were unable to recover it. 
00:20:41.727 [2024-04-18 17:07:57.211295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.727 [2024-04-18 17:07:57.211453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.727 [2024-04-18 17:07:57.211481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.727 qpair failed and we were unable to recover it.
00:20:41.727 [2024-04-18 17:07:57.211668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.727 [2024-04-18 17:07:57.211825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.727 [2024-04-18 17:07:57.211852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.727 qpair failed and we were unable to recover it.
00:20:41.727 [2024-04-18 17:07:57.211992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.212126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.212153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.212345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.212510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.212537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.212697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.212823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.212850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.212976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.213083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.213110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.213254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.213376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.213448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.213611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.213782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.213808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.213967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.214144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.214172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.214315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.214466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.214496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.214656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.214790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.214817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.214970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.215082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.215109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.215246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.215378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.215414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.215568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.215701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.215727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.215859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.215995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.216021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.216149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.216298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.216327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.216485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.216604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.216630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.216763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.216898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.216940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.217122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.217245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.217271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.217429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.217579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.217609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.217731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.217866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.217893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.218056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.218185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.218214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.218358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.218515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.218544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.218697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.218835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.218861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.218966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.219095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.219122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.219252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.219433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.219461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.219567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.219710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.219736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.728 qpair failed and we were unable to recover it.
00:20:41.728 [2024-04-18 17:07:57.219960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.728 [2024-04-18 17:07:57.220130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.220159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.220341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.220450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.220478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.220613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.220722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.220749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.220851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.220998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.221027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.221149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.221261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.221291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.221466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.221599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.221625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.221761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.221874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.221903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.222014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.222154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.222182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.222339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.222488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.222533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.222672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.222847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.222873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.222999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.223125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.223158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.223283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.223394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.223421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.223550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.223672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.223702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.223878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.224019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.224048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.224179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.224343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.224370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.224506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.224653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.224682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.224827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.225005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.225031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.225163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.225301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.225327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.225513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.225647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.225673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.225828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.225934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.225961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.729 qpair failed and we were unable to recover it.
00:20:41.729 [2024-04-18 17:07:57.226099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.226212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.729 [2024-04-18 17:07:57.226238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.226345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.226510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.226539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.226688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.226870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.226901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.227009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.227146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.227172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.227326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.227447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.227478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.227617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.227726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.227756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.227873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.228003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.228030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.228213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.228331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.228360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.228521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.228659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.228706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.228846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.228947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.228973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.229134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.229298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.229326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.229456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.229566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.229592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.229727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.229868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.229898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.230024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.230151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.230178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.230292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.230396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.230435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.230578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.230721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.230763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.230922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.231062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.231088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.231211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.231334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.231360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.231559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.231685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.231716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.231836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.231973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.231999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.232131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.232262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.232287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.232444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.232579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.232605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.232763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.232890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.232930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.233055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.233172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.233200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.730 [2024-04-18 17:07:57.233352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.233531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.730 [2024-04-18 17:07:57.233557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.730 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.233675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.233833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.233859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.233993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.234129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.234157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.234287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.234426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.234453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.234554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.234697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.234723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.234884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.235030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.235058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.235215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.235333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.235361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.235549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.235660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.235701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.235832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.235985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.236033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.236181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.236293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.236322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.236486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.236594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.236621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.236782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.236897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.236926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.237053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.237204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.237232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.237450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.237552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.237579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.237699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.237862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.237890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.238004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.238143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.238172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.238302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.238403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.238440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.238539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.238649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.238675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.238827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.238997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.239026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.239159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.239321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.239349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.239527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.239647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.239673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.239794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.239946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.239975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.240122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.240233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.240261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.240392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.240524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.240551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.240690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.240849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.240891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.241086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.241203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.241232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.241361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.241532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.731 [2024-04-18 17:07:57.241558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.731 qpair failed and we were unable to recover it.
00:20:41.731 [2024-04-18 17:07:57.241699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.241873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.241901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.242080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.242237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.242265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.242413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.242561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.242588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.242714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.242838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.242867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.242996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.243129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.243159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.243312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.243451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.243478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.243584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.243726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.243752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.243886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.244008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.244037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.244154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.244263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.244291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.244448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.244578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.244604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.244714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.244854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.244880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.245001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.245126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.245154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.245274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.245444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.245470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.245584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.245712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.245738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.245898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.246045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.246071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.246237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.246338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.246364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.246481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.246589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.246616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.246776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.246933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.246962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.247116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.247245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.247271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.247409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.247580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.247605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.247742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.247886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.247915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.248075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.248219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.248247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.248378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.248491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.248517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.248664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.248783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.248811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.248928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.249038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.732 [2024-04-18 17:07:57.249067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.732 qpair failed and we were unable to recover it.
00:20:41.732 [2024-04-18 17:07:57.249192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.249324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.249349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.249498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.249621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.249647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.249770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.249939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.249967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.250146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.250281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.250306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.250438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.250550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.250575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.250705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.250810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.250835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.250968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.251081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.251107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.251241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.251370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.251405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.251520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.251653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.251680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.251816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.251926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.251951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.252116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.252220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.252245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.252380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.252502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.252529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.252658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.252781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.252806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.252930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.253041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.253069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.253239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.253397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.253427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.253582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.253720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.253747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.253852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.253979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.254024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.254181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.254312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.254337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.254480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.254584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.254610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.254726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.254869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.254898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.255022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.255173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.733 [2024-04-18 17:07:57.255198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.733 qpair failed and we were unable to recover it.
00:20:41.733 [2024-04-18 17:07:57.255342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.733 [2024-04-18 17:07:57.255450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.733 [2024-04-18 17:07:57.255476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.733 qpair failed and we were unable to recover it. 00:20:41.733 [2024-04-18 17:07:57.255581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.733 [2024-04-18 17:07:57.255735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.733 [2024-04-18 17:07:57.255764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.733 qpair failed and we were unable to recover it. 00:20:41.733 [2024-04-18 17:07:57.255879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.733 [2024-04-18 17:07:57.256016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.733 [2024-04-18 17:07:57.256045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.733 qpair failed and we were unable to recover it. 00:20:41.733 [2024-04-18 17:07:57.256169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.733 [2024-04-18 17:07:57.256274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.256300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 
00:20:41.734 [2024-04-18 17:07:57.256418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.256574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.256600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.256764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.256876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.256904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.257041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.257173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.257200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.257370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.257508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.257534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 
00:20:41.734 [2024-04-18 17:07:57.257674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.257798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.257826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.257961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.258091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.258117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.258251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.258404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.258431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.258562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.258719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.258747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 
00:20:41.734 [2024-04-18 17:07:57.258876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.259009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.259035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.259181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.259291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.259320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.259425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.259570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.259598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.259722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.259832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.259857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 
00:20:41.734 [2024-04-18 17:07:57.260026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.260161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.260187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.260319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.260467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.260493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.260624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.260762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.260788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.260892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.260995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.261022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 
00:20:41.734 [2024-04-18 17:07:57.261150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.261330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.261358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.261547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.261676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.261707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.261873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.262040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.262086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.262217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.262418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.262445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 
00:20:41.734 [2024-04-18 17:07:57.262584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.262700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.262727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.262858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.263019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.263045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.263180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.263329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.263355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.263474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.263587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.263613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 
00:20:41.734 [2024-04-18 17:07:57.263746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.263850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.263877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.263984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.264143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.264169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.734 qpair failed and we were unable to recover it. 00:20:41.734 [2024-04-18 17:07:57.264300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.264429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.734 [2024-04-18 17:07:57.264455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.264589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.264699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.264725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 
00:20:41.735 [2024-04-18 17:07:57.264860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.264993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.265021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.265167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.265309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.265338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.265498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.265651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.265677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.265831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.265975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.266003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 
00:20:41.735 [2024-04-18 17:07:57.266184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.266359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.266397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.266528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.266659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.266685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.266828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.266952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.266980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.267117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.267230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.267261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 
00:20:41.735 [2024-04-18 17:07:57.267428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.267577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.267603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.267718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.267820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.267846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.268016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.268139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.268170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.268318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.268501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.268527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 
00:20:41.735 [2024-04-18 17:07:57.268633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.268761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.268787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.268935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.269104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.269132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.269281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.269408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.269451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 00:20:41.735 [2024-04-18 17:07:57.269562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.269666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.269693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.735 qpair failed and we were unable to recover it. 
00:20:41.735 [2024-04-18 17:07:57.269820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.735 [2024-04-18 17:07:57.269961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.269987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.270150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.270264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.270294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.270457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.270584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.270610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.270736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.270883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.270911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 
00:20:41.736 [2024-04-18 17:07:57.271060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.271169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.271197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.271353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.271492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.271519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.271624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.271783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.271809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.271982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.272100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.272128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 
00:20:41.736 [2024-04-18 17:07:57.272288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.272425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.272452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.272588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.272702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.272728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.272923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.273043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.273071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.273216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.273344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.273373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 
00:20:41.736 [2024-04-18 17:07:57.273540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.273640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.273686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.273860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.273972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.274001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.274113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.274255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.274284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 00:20:41.736 [2024-04-18 17:07:57.274413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.274546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.736 [2024-04-18 17:07:57.274572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.736 qpair failed and we were unable to recover it. 
00:20:41.736 [2024-04-18 17:07:57.274683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.736 [2024-04-18 17:07:57.274808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.736 [2024-04-18 17:07:57.274837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.736 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 repeat from 2024-04-18 17:07:57.274958 through 17:07:57.301151 ...]
00:20:41.739 [2024-04-18 17:07:57.301151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.739 [2024-04-18 17:07:57.301304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.739 [2024-04-18 17:07:57.301330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.739 qpair failed and we were unable to recover it.
00:20:41.739 [2024-04-18 17:07:57.301513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.301647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.301672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.739 qpair failed and we were unable to recover it. 00:20:41.739 [2024-04-18 17:07:57.301805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.301937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.301962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.739 qpair failed and we were unable to recover it. 00:20:41.739 [2024-04-18 17:07:57.302069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.302181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.302207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.739 qpair failed and we were unable to recover it. 00:20:41.739 [2024-04-18 17:07:57.302337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.302478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.302504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.739 qpair failed and we were unable to recover it. 
00:20:41.739 [2024-04-18 17:07:57.302662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.302785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.302810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.739 qpair failed and we were unable to recover it. 00:20:41.739 [2024-04-18 17:07:57.302991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.303130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.739 [2024-04-18 17:07:57.303159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.303272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.303404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.303431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.303565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.303698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.303725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 
00:20:41.740 [2024-04-18 17:07:57.303863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.303990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.304019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.304200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.304347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.304375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.304562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.304696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.304722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.304877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.304989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.305019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 
00:20:41.740 [2024-04-18 17:07:57.305204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.305358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.305397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.305521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.305657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.305683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.305851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.306016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.306042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.306191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.306339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.306371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 
00:20:41.740 [2024-04-18 17:07:57.306563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.306711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.306756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.306906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.307026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.307056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.307202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.307375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.307415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.307569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.307728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.307754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 
00:20:41.740 [2024-04-18 17:07:57.307894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.308025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.308054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.308226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.308374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.308414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.308566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.308673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.308699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.308822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.308982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.309008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 
00:20:41.740 [2024-04-18 17:07:57.309144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.309249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.309275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.309411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.309577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.309603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.309732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.309847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.309876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.310055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.310195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.310223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 
00:20:41.740 [2024-04-18 17:07:57.310344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.310509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.310536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.310690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.310872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.310901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.311061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.311221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.311247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.740 qpair failed and we were unable to recover it. 00:20:41.740 [2024-04-18 17:07:57.311386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.311519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.740 [2024-04-18 17:07:57.311545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 
00:20:41.741 [2024-04-18 17:07:57.311726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.311832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.311861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.312015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.312172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.312201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.312394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.312525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.312552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.312707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.312865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.312894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 
00:20:41.741 [2024-04-18 17:07:57.313051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.313218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.313244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.313416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.313521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.313548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.313738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.313899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.313941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.314092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.314261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.314290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 
00:20:41.741 [2024-04-18 17:07:57.314449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.314578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.314604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.314728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.314902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.314931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.315080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.315198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.315227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.315386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.315579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.315609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 
00:20:41.741 [2024-04-18 17:07:57.315780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.315914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.315943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.316112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.316264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.316293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.316431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.316592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.316618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.316746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.316887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.316916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 
00:20:41.741 [2024-04-18 17:07:57.317065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.317181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.317210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.317368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.317539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.317565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.317706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.317852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.317882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.318055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.318195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.318225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 
00:20:41.741 [2024-04-18 17:07:57.318417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.318521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.318549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.318717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.318866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.318894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.319055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.319193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.319219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.319323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.319473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.319500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 
00:20:41.741 [2024-04-18 17:07:57.319659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.319838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.319867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.320039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.320186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.320215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.320378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.320496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.320522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 00:20:41.741 [2024-04-18 17:07:57.320673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.320818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.741 [2024-04-18 17:07:57.320847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:41.741 qpair failed and we were unable to recover it. 
00:20:41.744 [2024-04-18 17:07:57.344542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.744 [2024-04-18 17:07:57.344693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.744 [2024-04-18 17:07:57.344735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.744 qpair failed and we were unable to recover it.
00:20:41.744 [2024-04-18 17:07:57.344849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.744 [2024-04-18 17:07:57.345013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.744 [2024-04-18 17:07:57.345039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.744 qpair failed and we were unable to recover it.
00:20:41.744 [2024-04-18 17:07:57.345197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.744 [2024-04-18 17:07:57.345352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.744 [2024-04-18 17:07:57.345389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.744 qpair failed and we were unable to recover it.
00:20:41.744 [2024-04-18 17:07:57.345573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.744 [2024-04-18 17:07:57.345687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.744 [2024-04-18 17:07:57.345713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.744 qpair failed and we were unable to recover it.
00:20:41.744 [2024-04-18 17:07:57.345877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.346009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.346050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.744 qpair failed and we were unable to recover it. 00:20:41.744 [2024-04-18 17:07:57.346224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.346334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.346363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.744 qpair failed and we were unable to recover it. 00:20:41.744 [2024-04-18 17:07:57.346497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.346654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.346680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.744 qpair failed and we were unable to recover it. 00:20:41.744 [2024-04-18 17:07:57.346837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.346969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.347010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.744 qpair failed and we were unable to recover it. 
00:20:41.744 [2024-04-18 17:07:57.347186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.347325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.347353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.744 qpair failed and we were unable to recover it. 00:20:41.744 [2024-04-18 17:07:57.347464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.347578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.744 [2024-04-18 17:07:57.347606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.744 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.347778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.347881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.347906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.348077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.348221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.348246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 
00:20:41.745 [2024-04-18 17:07:57.348397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.348524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.348552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.348679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.348843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.348868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.349042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.349202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.349228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.349387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.349538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.349564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 
00:20:41.745 [2024-04-18 17:07:57.349698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.349886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.349953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.350100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.350256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.350283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.350445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.350578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.350606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.350755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.350884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.350909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 
00:20:41.745 [2024-04-18 17:07:57.351030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.351174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.351203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.351375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.351512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.351538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.351671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.351833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.351859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.352027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.352185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.352213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 
00:20:41.745 [2024-04-18 17:07:57.352393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.352564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.352593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.352725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.352859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.352885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.353077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.353209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.353237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.353398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.353569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.353598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 
00:20:41.745 [2024-04-18 17:07:57.353765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.353891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.353917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.354086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.354256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.354285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.354432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.354598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.354624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.354760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.354888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.354913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 
00:20:41.745 [2024-04-18 17:07:57.355048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.355153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.355180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.355377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.355491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.355516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.355623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.355720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.355745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.355899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.356047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.356075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 
00:20:41.745 [2024-04-18 17:07:57.356201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.356309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.356335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.356504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.356678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.356707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.356818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.356970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.356996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.357147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.357315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.357343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 
00:20:41.745 [2024-04-18 17:07:57.357469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.357606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.745 [2024-04-18 17:07:57.357631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.745 qpair failed and we were unable to recover it. 00:20:41.745 [2024-04-18 17:07:57.357756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.357899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.357927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.358084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.358240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.358266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.358403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.358533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.358558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 
00:20:41.746 [2024-04-18 17:07:57.358725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.358864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.358892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.359074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.359221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.359249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.359402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.359554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.359598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.359736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.359872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.359901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 
00:20:41.746 [2024-04-18 17:07:57.360044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.360155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.360184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.360355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.360514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.360543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.360715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.360844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.360869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.361010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.361157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.361185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 
00:20:41.746 [2024-04-18 17:07:57.361365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.361512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.361554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.361730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.361890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.361916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.362049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.362210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.362238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.362391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.362515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.362541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 
00:20:41.746 [2024-04-18 17:07:57.362692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.362890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.362938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.363120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.363279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.363305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.363437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.363574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.363600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.363786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.363967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.363993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 
00:20:41.746 [2024-04-18 17:07:57.364149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.364307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.364332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.364448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.364617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.364643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.364834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.364976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.365004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.365175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.365294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.365324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 
00:20:41.746 [2024-04-18 17:07:57.365487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.365649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.365691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.365843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.366010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.366035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.366188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.366358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.366391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 00:20:41.746 [2024-04-18 17:07:57.366553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.366654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.746 [2024-04-18 17:07:57.366681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:41.746 qpair failed and we were unable to recover it. 
00:20:41.746 [2024-04-18 17:07:57.366808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.746 [2024-04-18 17:07:57.366940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.746 [2024-04-18 17:07:57.366965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:41.746 qpair failed and we were unable to recover it.
[... the same failure sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats continuously from 17:07:57.366 through 17:07:57.394 ...]
00:20:42.034 [2024-04-18 17:07:57.394697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.394797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.394822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 00:20:42.034 [2024-04-18 17:07:57.394957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.395086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.395110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 00:20:42.034 [2024-04-18 17:07:57.395246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.395391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.395438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 00:20:42.034 [2024-04-18 17:07:57.395566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.395682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.395707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 
00:20:42.034 [2024-04-18 17:07:57.395817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.395964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.395988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 00:20:42.034 [2024-04-18 17:07:57.396104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.396235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.396260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 00:20:42.034 [2024-04-18 17:07:57.396369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.396489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.396514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 00:20:42.034 [2024-04-18 17:07:57.396647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.396779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.396803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 
00:20:42.034 [2024-04-18 17:07:57.396934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.397159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.034 [2024-04-18 17:07:57.397186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.034 qpair failed and we were unable to recover it. 00:20:42.034 [2024-04-18 17:07:57.397371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.397509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.397533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.397639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.397795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.397823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.397971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.398124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.398153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.398312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.398473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.398500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.398672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.398858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.398886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.399058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.399211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.399238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.399364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.399486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.399511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.400257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.400423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.400466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.400604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.400761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.400786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.400947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.401086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.401112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.401262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.401406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.401433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.401574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.401677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.401702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.401879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.402071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.402098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.402275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.402433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.402459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.402571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.402684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.402709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.402840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.402969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.402994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.403129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.403255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.403282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.403397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.403550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.403575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.403709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.403904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.403932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.404106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.404249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.404277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.404411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.404540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.404566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.404699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.404847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.404877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.405018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.405143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.405170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.405293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.405457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.405482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.405589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.405724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.405750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.405877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.406052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.406080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.406250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.406412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.406437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.406570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.406740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.406765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.406909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.407051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.407078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.407236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.407391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.407415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.407535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.407670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.407694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.407857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.407978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.408006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.408201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.408341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.408368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.408500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.408611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.408636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.408751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.408913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.408938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 
00:20:42.035 [2024-04-18 17:07:57.409107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.409230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.409258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.035 qpair failed and we were unable to recover it. 00:20:42.035 [2024-04-18 17:07:57.409435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.035 [2024-04-18 17:07:57.409562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.409587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.409726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.409848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.409872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.410008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.410170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.410199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.410351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.410507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.410533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.410650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.410816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.410841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.410996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.411163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.411190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.411348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.411490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.411515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.411619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.411761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.411785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.411978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.412154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.412181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.412332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.412465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.412491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.412627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.412734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.412759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.412903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.413079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.413106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.413281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.413385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.413409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.413544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.413706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.413735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.413879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.414032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.414059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.414214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.414376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.414405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.414510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.414645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.414669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.414808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.414913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.414940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.415120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.415230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.415260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.415456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.415611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.415636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.415790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.415968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.415995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.416179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.416322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.416349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.416501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.416630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.416654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.416757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.416905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.416932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.417153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.417343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.417370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.417556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.417712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.417739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.417907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.418115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.418169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.418295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.418436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.418463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.418574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.418715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.418740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.418899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.419056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.419084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.419231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.419430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.419475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.419581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.419742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.419767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.419923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.420043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.420072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.420250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.420414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.420439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.420571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.420719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.420760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 
00:20:42.036 [2024-04-18 17:07:57.420902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.421054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.036 [2024-04-18 17:07:57.421081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.036 qpair failed and we were unable to recover it. 00:20:42.036 [2024-04-18 17:07:57.421276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.421438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.421463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.421597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.421697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.421723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.421860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.422008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.422036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.422183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.422324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.422351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.422511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.422619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.422643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.422761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.422922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.422949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.423070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.423228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.423255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.423439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.423553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.423578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.423714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.423843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.423870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.424060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.424199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.424224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.424335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.424512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.424537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.424649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.424753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.424777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.424881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.425020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.425044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.425169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.425290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.425315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.425481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.425608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.425635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.425798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.425935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.425960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.426068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.426228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.426252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.426408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.426573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.426598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.426739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.426871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.426906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.427064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.427204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.427231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.427354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.427533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.427558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.427662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.427794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.427819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.427997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.428170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.428197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.428340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.428464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.428492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.428648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.428782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.428807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.428951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.429102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.429129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.429240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.429393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.429421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.429567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.429676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.429706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.429872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.430023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.430055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.430197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.430367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.430407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.430532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.430660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.430686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.430842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.430986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.431013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.431173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.431325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.431352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.434558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.434723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.434751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.434958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.435068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.435093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 
00:20:42.037 [2024-04-18 17:07:57.435238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.435410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.435439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.037 [2024-04-18 17:07:57.435622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.435728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.037 [2024-04-18 17:07:57.435753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.037 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.435933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.436057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.436084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.436228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.436400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.436434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 
00:20:42.038 [2024-04-18 17:07:57.436619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.436748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.436789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.436944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.437089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.437118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.437230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.437407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.437435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.437590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.437722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.437747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 
00:20:42.038 [2024-04-18 17:07:57.437908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.438056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.438085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.438228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.438374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.438408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.438568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.438703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.438729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.438833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.438958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.438983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 
00:20:42.038 [2024-04-18 17:07:57.439110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.439260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.439288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.439440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.439578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.439603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.439764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.439965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.440016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 00:20:42.038 [2024-04-18 17:07:57.440163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.440278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.038 [2024-04-18 17:07:57.440307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.038 qpair failed and we were unable to recover it. 
00:20:42.040 [2024-04-18 17:07:57.461403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.040 [2024-04-18 17:07:57.461570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.040 [2024-04-18 17:07:57.461610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.040 qpair failed and we were unable to recover it.
00:20:42.040 [2024-04-18 17:07:57.466634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.466739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.466765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.040 qpair failed and we were unable to recover it. 00:20:42.040 [2024-04-18 17:07:57.466917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.467060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.467087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.040 qpair failed and we were unable to recover it. 00:20:42.040 [2024-04-18 17:07:57.467250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.467428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.467456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.040 qpair failed and we were unable to recover it. 00:20:42.040 [2024-04-18 17:07:57.467583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.467694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.467718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.040 qpair failed and we were unable to recover it. 
00:20:42.040 [2024-04-18 17:07:57.467851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.468007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.468034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.040 qpair failed and we were unable to recover it. 00:20:42.040 [2024-04-18 17:07:57.468159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.468300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.468328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.040 qpair failed and we were unable to recover it. 00:20:42.040 [2024-04-18 17:07:57.468492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.468594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.468619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.040 qpair failed and we were unable to recover it. 00:20:42.040 [2024-04-18 17:07:57.468749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.468904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.040 [2024-04-18 17:07:57.468931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.040 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.469053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.469169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.469197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.469311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.469453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.469479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.469592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.469756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.469784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.469898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.470041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.470068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.470213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.470357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.470419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.470582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.470725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.470752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.470899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.471034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.471062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.471214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.471387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.471429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.471543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.471675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.471701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.471861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.472039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.472091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.472244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.472356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.472387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.472497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.472599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.472623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.472762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.472866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.472890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.472993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.473125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.473149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.473247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.473355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.473389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.473526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.473658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.473682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.473789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.473949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.473974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.474130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.474238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.474265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.474379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.474516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.474540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.474649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.474762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.474786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.474951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.475060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.475087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.475228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.475391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.475420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.475593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.475705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.475730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.475852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.476010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.476033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.476165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.476293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.476316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.476481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.476615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.476639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.476772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.476908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.476943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.477061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.477189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.477215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.477393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.477524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.477548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.477690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.477861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.477888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.478064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.478187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.478213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.041 [2024-04-18 17:07:57.478366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.478484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.478508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 
00:20:42.041 [2024-04-18 17:07:57.478678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.478824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.041 [2024-04-18 17:07:57.478851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.041 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.479007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.479152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.479179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.479331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.479437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.479464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.479618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.479766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.479793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 
00:20:42.042 [2024-04-18 17:07:57.479946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.480091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.480124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.480275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.480413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.480438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.480570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.480758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.480782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.480896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.481030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.481056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 
00:20:42.042 [2024-04-18 17:07:57.481212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.481317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.481342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.481462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.481587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.481612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.481757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.481929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.481956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.482087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.482196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.482221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 
00:20:42.042 [2024-04-18 17:07:57.482326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.482474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.482499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.482632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.482766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.482793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.482924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.483060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.483089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.483275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.483417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.483442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 
00:20:42.042 [2024-04-18 17:07:57.483654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.483764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.483790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.483906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.484018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.484043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.484166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.484288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.484315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 00:20:42.042 [2024-04-18 17:07:57.484473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.484579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.042 [2024-04-18 17:07:57.484605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.042 qpair failed and we were unable to recover it. 
00:20:42.044 [2024-04-18 17:07:57.507820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.507947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.507972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.508106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.508257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.508285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.508445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.508559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.508584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.508727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.508857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.508882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 
00:20:42.044 [2024-04-18 17:07:57.509011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.509123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.509147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.509257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.509362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.509392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.509498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.509609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.509633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.509748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.509842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.509867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 
00:20:42.044 [2024-04-18 17:07:57.510025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.510153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.510178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.510336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.510489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.510514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.510645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.510779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.510805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 00:20:42.044 [2024-04-18 17:07:57.510907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.511021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.511046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.044 qpair failed and we were unable to recover it. 
00:20:42.044 [2024-04-18 17:07:57.511177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.044 [2024-04-18 17:07:57.511304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.511329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.511429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.511569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.511593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.511699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.511809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.511833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.511960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.512066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.512092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 
00:20:42.045 [2024-04-18 17:07:57.512227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.512353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.512378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.512496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.512608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.512633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.512746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.512875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.512900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.513005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.513139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.513163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 
00:20:42.045 [2024-04-18 17:07:57.513315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.513477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.513502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.513609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.513747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.513772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.513880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.513990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.514016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.514121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.514227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.514251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 
00:20:42.045 [2024-04-18 17:07:57.514355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.514491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.514516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.514648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.514757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.514781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.514924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.515026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.515050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.515177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.515287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.515311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 
00:20:42.045 [2024-04-18 17:07:57.515444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.515581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.515606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.515742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.515849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.515873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.515983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.516111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.516135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.516266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.516372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.516404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 
00:20:42.045 [2024-04-18 17:07:57.516511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.516619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.516645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.516779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.516879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.516905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.517062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.517190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.517214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.517315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.517429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.517454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 
00:20:42.045 [2024-04-18 17:07:57.517590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.517725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.517750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.517885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.518017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.518042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.518150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.518249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.518274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.518418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.518532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.518557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 
00:20:42.045 [2024-04-18 17:07:57.518658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.518789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.518814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.518926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.519066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.519091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.519220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.519378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.519409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.519542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.519641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.519666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 
00:20:42.045 [2024-04-18 17:07:57.519778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.519907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.519931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.045 [2024-04-18 17:07:57.520033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.520142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.045 [2024-04-18 17:07:57.520168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.045 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.520299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.520415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.520440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.520570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.520680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.520704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 
00:20:42.046 [2024-04-18 17:07:57.520808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.520909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.520934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.521032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.521193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.521218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.521357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.521487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.521513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.521623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.521733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.521763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 
00:20:42.046 [2024-04-18 17:07:57.521900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.522055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.522080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.522178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.522316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.522340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.522481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.522638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.522663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.522760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.522861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.522886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 
00:20:42.046 [2024-04-18 17:07:57.523021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.523154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.523178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.523278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.523390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.523415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.523570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.523729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.523753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 00:20:42.046 [2024-04-18 17:07:57.523917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.524078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.046 [2024-04-18 17:07:57.524102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.046 qpair failed and we were unable to recover it. 
00:20:42.048 [2024-04-18 17:07:57.547705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.547860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.547884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.548009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.548149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.548176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.548319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.548459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.548487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.548605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.548733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.548758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 
00:20:42.048 [2024-04-18 17:07:57.548936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.549075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.549102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.549220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.549335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.549362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.549529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.549633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.549658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.549786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.549902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.549930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 
00:20:42.048 [2024-04-18 17:07:57.550082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.550198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.550226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.550348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.550485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.550510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.550695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.550834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.550863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.048 qpair failed and we were unable to recover it. 00:20:42.048 [2024-04-18 17:07:57.551037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.048 [2024-04-18 17:07:57.551163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.551187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.551295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.551400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.551426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.551556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.551662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.551687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.551787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.551965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.551991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.552145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.552247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.552273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.552464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.552588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.552612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.552733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.552874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.552901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.553053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.553155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.553180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.553312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.553418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.553444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.553580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.553713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.553738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.553875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.553980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.554007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.554130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.554293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.554318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.554489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.554599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.554624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.554732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.554836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.554861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.554962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.555091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.555117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.555250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.555351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.555375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.555489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.555622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.555648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.555811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.555942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.555967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.556100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.556282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.556309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.556436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.556570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.556595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.556703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.556879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.556906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.557053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.557190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.557218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.557366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.557482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.557507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.557637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.557807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.557834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.557995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.558141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.558169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.558318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.558456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.558481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.558636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.558769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.558793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.558986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.559110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.559139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.559290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.559421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.559446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.559621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.559723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.559747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.559886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.559994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.560019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.560122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.560274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.560298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.560405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.560539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.560564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.560704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.560832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.560856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.561000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.561134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.561175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.561316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.561488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.561515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.561637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.561780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.561807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 
00:20:42.049 [2024-04-18 17:07:57.561963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.562075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.562101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.049 [2024-04-18 17:07:57.562206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.562364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.049 [2024-04-18 17:07:57.562398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.049 qpair failed and we were unable to recover it. 00:20:42.050 [2024-04-18 17:07:57.562528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.562633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.562660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 00:20:42.050 [2024-04-18 17:07:57.562817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.562952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.562977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 
00:20:42.050 [2024-04-18 17:07:57.563105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.563237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.563261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 00:20:42.050 [2024-04-18 17:07:57.563450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.563607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.563632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 00:20:42.050 [2024-04-18 17:07:57.563807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.563934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.563959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 00:20:42.050 [2024-04-18 17:07:57.564104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.564206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.564231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 
00:20:42.050 [2024-04-18 17:07:57.564369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.564514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.564539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 00:20:42.050 [2024-04-18 17:07:57.564696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.564828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.564855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 00:20:42.050 [2024-04-18 17:07:57.564986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.565108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.565151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 00:20:42.050 [2024-04-18 17:07:57.565258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.565392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.050 [2024-04-18 17:07:57.565417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.050 qpair failed and we were unable to recover it. 
00:20:42.052 [2024-04-18 17:07:57.591117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.591243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.591268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.591406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.591551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.591580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.591723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.591870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.591900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.592046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.592155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.592179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 
00:20:42.052 [2024-04-18 17:07:57.592308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.592437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.592465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.592620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.592755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.592783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.592912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.593038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.593062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.593201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.593345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.593372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 
00:20:42.052 [2024-04-18 17:07:57.593523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.593714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.593754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.593925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.594054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.594081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.594194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.594295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.594320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.594435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.594567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.594594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 
00:20:42.052 [2024-04-18 17:07:57.594708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.594865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.594891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.595037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.595154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.595181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.595296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.595457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.595482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 00:20:42.052 [2024-04-18 17:07:57.595597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.595753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.052 [2024-04-18 17:07:57.595777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.052 qpair failed and we were unable to recover it. 
00:20:42.052 [2024-04-18 17:07:57.595931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.596046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.596073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.596194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.596312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.596352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.596462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.596600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.596625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.596780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.596932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.596957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.597092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.597303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.597331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.597515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.597646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.597688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.597825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.597972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.597999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.598175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.598305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.598329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.598503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.598640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.598665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.598821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.598955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.598982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.599121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.599273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.599298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.599415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.599522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.599546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.599700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.599839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.599867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.599973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.600150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.600175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.600280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.600409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.600434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.600624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.600785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.600809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.600939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.601060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.601085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.601245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.601344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.601369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.601546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.601712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.601736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.601868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.601974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.601999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.602106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.602260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.602284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.602433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.602555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.602583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.602722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.602844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.602871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.603022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.603178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.603203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.603335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.603443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.603468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.603583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.603733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.603761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.603941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.604070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.604110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.604250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.604390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.604433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.604535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.604645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.604685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.604842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.604969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.604993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.605146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.605308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.605332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.605435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.605570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.605594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.605694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.605826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.605852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.606048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.606143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.606168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.606301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.606412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.606438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.606545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.606712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.606736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 
00:20:42.053 [2024-04-18 17:07:57.606885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.607055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.607082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.607212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.607347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.053 [2024-04-18 17:07:57.607372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.053 qpair failed and we were unable to recover it. 00:20:42.053 [2024-04-18 17:07:57.607518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.607676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.607716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.054 qpair failed and we were unable to recover it. 00:20:42.054 [2024-04-18 17:07:57.607868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.608036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.608060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.054 qpair failed and we were unable to recover it. 
00:20:42.054 [2024-04-18 17:07:57.608189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.608297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.608323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.054 qpair failed and we were unable to recover it. 00:20:42.054 [2024-04-18 17:07:57.608424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.608557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.608582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.054 qpair failed and we were unable to recover it. 00:20:42.054 [2024-04-18 17:07:57.608722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.608860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.608885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.054 qpair failed and we were unable to recover it. 00:20:42.054 [2024-04-18 17:07:57.608989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.609117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.054 [2024-04-18 17:07:57.609141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.054 qpair failed and we were unable to recover it. 
00:20:42.056 [2024-04-18 17:07:57.635119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.635248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.635289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.635446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.635580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.635605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.635713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.635869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.635898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.636044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.636177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.636203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 
00:20:42.056 [2024-04-18 17:07:57.636334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.636463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.636491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.636610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.636754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.636781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.636937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.637061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.637086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.637269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.637396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.637425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 
00:20:42.056 [2024-04-18 17:07:57.637598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.637735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.637762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.637916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.638022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.638047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.638171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.638298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.638325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.638482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.638613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.638637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 
00:20:42.056 [2024-04-18 17:07:57.638798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.638928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.638952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.639136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.639278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.639305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.639421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.639565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.639593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.639726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.639829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.639855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 
00:20:42.056 [2024-04-18 17:07:57.639959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.640074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.640102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.640240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.640355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.640388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.640528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.640658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.640682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 00:20:42.056 [2024-04-18 17:07:57.640861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.641034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.641061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.056 qpair failed and we were unable to recover it. 
00:20:42.056 [2024-04-18 17:07:57.641202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.641346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.056 [2024-04-18 17:07:57.641374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.641508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.641613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.641638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.641750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.641881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.641905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.642059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.642201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.642228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.642363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.642476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.642502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.642611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.642763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.642791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.642915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.643061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.643089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.643265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.643403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.643447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.643593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.643736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.643763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.643944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.644082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.644110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.644289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.644391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.644416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.644585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.644717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.644744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.644862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.645009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.645033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.645162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.645292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.645316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.645407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.645537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.645561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.645664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.645797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.645821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.645952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.646081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.646106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.646207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.646391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.646437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.646551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.646686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.646710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.646841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.646936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.646961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.647092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.647188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.647212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.647344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.647552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.647578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.647690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.647829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.647855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.648006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.648140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.648166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.648302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.648441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.648471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.648616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.648749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.648790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.648933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.649093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.649118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.649248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.649346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.649372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.649545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.649676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.649719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.649869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.650016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.650041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.650171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.650298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.650322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.650440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.650543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.650569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.650736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.650898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.650922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.651054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.651168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.651195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.651343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.651483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.651508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 00:20:42.057 [2024-04-18 17:07:57.651660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.651772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.057 [2024-04-18 17:07:57.651813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.057 qpair failed and we were unable to recover it. 
00:20:42.057 [2024-04-18 17:07:57.651919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.057 [2024-04-18 17:07:57.652053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.057 [2024-04-18 17:07:57.652077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.057 qpair failed and we were unable to recover it.
[... the same sequence — two posix_sock_create connect() failures (errno = 111, ECONNREFUSED), one nvme_tcp_qpair_connect_sock error for tqpair=0x7f1b28000b90 (10.0.0.2:4420), then "qpair failed and we were unable to recover it." — repeats verbatim through timestamp 2024-04-18 17:07:57.677157 ...]
00:20:42.060 [2024-04-18 17:07:57.677282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.677434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.677459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.677592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.677720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.677747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.677891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.678010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.678037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.678181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.678286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.678310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 
00:20:42.060 [2024-04-18 17:07:57.678444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.678564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.678591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.678737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.678914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.678939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.679043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.679173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.679199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.679325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.679431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.679460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 
00:20:42.060 [2024-04-18 17:07:57.679581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.679762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.679787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.679921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.680025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.680050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.680200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.680321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.680350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.680471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.680592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.680621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 
00:20:42.060 [2024-04-18 17:07:57.680764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.680894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.680919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.681048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.681161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.681189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.681360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.681512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.681539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.681669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.681802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.681827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 
00:20:42.060 [2024-04-18 17:07:57.681974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.682116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.682143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.682253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.682360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.682393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.682530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.682636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.682660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.682821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.682942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.682970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 
00:20:42.060 [2024-04-18 17:07:57.683100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.683205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.683233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.683351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.683468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.683493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.683613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.683736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.683763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.683910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.684034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.684058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 
00:20:42.060 [2024-04-18 17:07:57.684159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.684261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.684285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.684392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.684521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.684545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.060 qpair failed and we were unable to recover it. 00:20:42.060 [2024-04-18 17:07:57.684675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.060 [2024-04-18 17:07:57.684815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.684842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.684977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.685086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.685110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.685213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.685364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.685399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.685521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.685634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.685662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.685787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.685889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.685915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.686024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.686151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.686176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.686278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.686393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.686418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.686571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.686700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.686741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.686884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.687020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.687047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.687168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.687322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.687347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.687488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.687650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.687675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.687780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.687936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.687963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.688087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.688201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.688229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.688360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.688483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.688508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.688645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.688766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.688794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.688908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.689060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.689085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.689218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.689326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.689353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.689515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.689627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.689652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.689820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.689926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.689952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.690061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.690165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.690189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.690294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.690449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.690478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.690614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.690764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.690788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.690897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.690996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.691021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.691136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.691249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.691276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.691429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.691553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.691580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.691746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.691852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.691877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.692025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.692137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.692165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.692283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.692413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.692441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.692567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.692670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.692695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.692797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.692897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.692922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.693033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.693166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.693190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.693287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.693395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.693420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.693567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.693715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.693742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.693888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.694023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.694056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.694187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.694306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.694332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.694466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.694600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.694625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.694752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.694860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.694886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 00:20:42.061 [2024-04-18 17:07:57.694994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.695123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.061 [2024-04-18 17:07:57.695149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.061 qpair failed and we were unable to recover it. 
00:20:42.061 [2024-04-18 17:07:57.695280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.061 [2024-04-18 17:07:57.695392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.061 [2024-04-18 17:07:57.695418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.695539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.695673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.695700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.695809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.695940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.695967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.696102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.696219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.696248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.696410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.696518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.696544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.696678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.696828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.696853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.696988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.697143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.697171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.697293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.697464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.697492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.697625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.697781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.697806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.697942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.698095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.698123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.698275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.698405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.698435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.698570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.698678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.698703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.698825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.698979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.699005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.699150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.699287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.699316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.699479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.699587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.699614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.699744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.699858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.699883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.700018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.700129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.700154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.700260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.700377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.700410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.700578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.700707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.700733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.700879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.700993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.701022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.701160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.701288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.701313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.701462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.701582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.701611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.701720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.701862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.701889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.702013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.702148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.702174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.702330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.702455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.702484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.702596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.702757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.702782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.702922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.703025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.703051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.703175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.703318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.703346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.703495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.703596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.703621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.703760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.703868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.703894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.704051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.704208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.704234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.704394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.704529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.704553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.704659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.704763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.704788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.704890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.705061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.705088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.705234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.705367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.705400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.705530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.705629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.705653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.062 qpair failed and we were unable to recover it.
00:20:42.062 [2024-04-18 17:07:57.705760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.705904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.062 [2024-04-18 17:07:57.705928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.706034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.706174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.706201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.706360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.706498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.706523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.706653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.706766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.706794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.706922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.707037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.707066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.707214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.707394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.707438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.707551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.707696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.707722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.707866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.707981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.708008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.708177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.708288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.708313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.708448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.708625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.708658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.708791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.708929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.708954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.709056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.709162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.709187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.709347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.709512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.709539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.709653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.709777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.709806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.709964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.710070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.710094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.710214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.710319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.710344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.710505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.710620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.710663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.710799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.710907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.710932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.711031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.711161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.711203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.711350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.711486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.711515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.711678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.711813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.711852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.711969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.712082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.712109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.712236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.712364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.712399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.712542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.712677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.712703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.712830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.712951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.712978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.713107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.713221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.713249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.713405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.713513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.713537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.713641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.713766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.713794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.713944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.714084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.714112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.063 qpair failed and we were unable to recover it.
00:20:42.063 [2024-04-18 17:07:57.714240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.063 [2024-04-18 17:07:57.714344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.714373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.365 qpair failed and we were unable to recover it.
00:20:42.365 [2024-04-18 17:07:57.714513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.714632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.714659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.365 qpair failed and we were unable to recover it.
00:20:42.365 [2024-04-18 17:07:57.714776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.714889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.714915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.365 qpair failed and we were unable to recover it.
00:20:42.365 [2024-04-18 17:07:57.715046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.715176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.715200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.365 qpair failed and we were unable to recover it.
00:20:42.365 [2024-04-18 17:07:57.715323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.715452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.715486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.365 qpair failed and we were unable to recover it.
00:20:42.365 [2024-04-18 17:07:57.715677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.715843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.715873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.365 qpair failed and we were unable to recover it.
00:20:42.365 [2024-04-18 17:07:57.716038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.716154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.365 [2024-04-18 17:07:57.716179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.365 qpair failed and we were unable to recover it.
00:20:42.365 [2024-04-18 17:07:57.716292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.716444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.716473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 00:20:42.365 [2024-04-18 17:07:57.716628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.716778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.716807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 00:20:42.365 [2024-04-18 17:07:57.716953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.717072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.717098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 00:20:42.365 [2024-04-18 17:07:57.717211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.717320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.717349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 
00:20:42.365 [2024-04-18 17:07:57.717546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.717697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.717727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 00:20:42.365 [2024-04-18 17:07:57.717878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.717992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.718018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 00:20:42.365 [2024-04-18 17:07:57.718163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.718327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.718353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 00:20:42.365 [2024-04-18 17:07:57.718505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.718641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.718672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 
00:20:42.365 [2024-04-18 17:07:57.718812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.718927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.718955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 00:20:42.365 [2024-04-18 17:07:57.719074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.719227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.719258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.365 qpair failed and we were unable to recover it. 00:20:42.365 [2024-04-18 17:07:57.719421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.365 [2024-04-18 17:07:57.719559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.719589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.719731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.719835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.719866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 
00:20:42.366 [2024-04-18 17:07:57.720005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.720177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.720207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.720317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.720480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.720510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.720669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.720838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.720884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.720999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.721117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.721146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 
00:20:42.366 [2024-04-18 17:07:57.721301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.721420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.721455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.721590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.721705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.721733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.721873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.722005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.722037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.722187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.722321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.722351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 
00:20:42.366 [2024-04-18 17:07:57.722526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.722635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.722664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.722768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.722880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.722911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.723047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.723199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.723228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.723365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.723530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.723560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 
00:20:42.366 [2024-04-18 17:07:57.723683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.723802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.723833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.723978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.724088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.724118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.724260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.724405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.724450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.724580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.724736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.724764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 
00:20:42.366 [2024-04-18 17:07:57.724891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.725038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.725072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.725240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.725354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.725392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.725537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.725664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.725708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 00:20:42.366 [2024-04-18 17:07:57.725882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.726049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.726076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.366 qpair failed and we were unable to recover it. 
00:20:42.366 [2024-04-18 17:07:57.726199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.366 [2024-04-18 17:07:57.726315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.726346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.726544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.726691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.726720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.726851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.727001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.727030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.727170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.727308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.727339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 
00:20:42.367 [2024-04-18 17:07:57.727525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.727639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.727667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.727809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.727930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.727964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.728129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.728292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.728339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.728488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.728635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.728665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 
00:20:42.367 [2024-04-18 17:07:57.728785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.728968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.728997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.729128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.729252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.729281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.729422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.729552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.729583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.729769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.729911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.729943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 
00:20:42.367 [2024-04-18 17:07:57.730070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.730207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.730236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.730409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.730574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.730604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.730719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.730869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.730902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.731043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.731182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.731211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 
00:20:42.367 [2024-04-18 17:07:57.731329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.731449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.731479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.731595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.731710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.731740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.731899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.732013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.732056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.732192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.732339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.732368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 
00:20:42.367 [2024-04-18 17:07:57.732526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.732670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.732699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.732817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.732962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.732989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.733132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.733286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.733318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.733494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.733605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.733636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 
00:20:42.367 [2024-04-18 17:07:57.733754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.733866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.733893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.734049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.734216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.734242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.734370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.734531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.734563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 00:20:42.367 [2024-04-18 17:07:57.734704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.734822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.734850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.367 qpair failed and we were unable to recover it. 
00:20:42.367 [2024-04-18 17:07:57.734993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.367 [2024-04-18 17:07:57.735159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.735190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.368 qpair failed and we were unable to recover it. 00:20:42.368 [2024-04-18 17:07:57.735350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.735498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.735529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.368 qpair failed and we were unable to recover it. 00:20:42.368 [2024-04-18 17:07:57.735674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.735807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.735838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.368 qpair failed and we were unable to recover it. 00:20:42.368 [2024-04-18 17:07:57.735975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.736097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.736126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.368 qpair failed and we were unable to recover it. 
00:20:42.368 [2024-04-18 17:07:57.736284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.736439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.736469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.368 qpair failed and we were unable to recover it. 00:20:42.368 [2024-04-18 17:07:57.736600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.736724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.736754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.368 qpair failed and we were unable to recover it. 00:20:42.368 [2024-04-18 17:07:57.736921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.737041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.737068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.368 qpair failed and we were unable to recover it. 00:20:42.368 [2024-04-18 17:07:57.737193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.737319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.368 [2024-04-18 17:07:57.737348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.368 qpair failed and we were unable to recover it. 
00:20:42.371 [2024-04-18 17:07:57.762979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.763101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.763126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.371 qpair failed and we were unable to recover it. 00:20:42.371 [2024-04-18 17:07:57.763237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.763447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.763473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.371 qpair failed and we were unable to recover it. 00:20:42.371 [2024-04-18 17:07:57.763599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.763701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.763731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.371 qpair failed and we were unable to recover it. 00:20:42.371 [2024-04-18 17:07:57.763895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.764027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.764069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.371 qpair failed and we were unable to recover it. 
00:20:42.371 [2024-04-18 17:07:57.764217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.764349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.764376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.371 qpair failed and we were unable to recover it. 00:20:42.371 [2024-04-18 17:07:57.764499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.764606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.764633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.371 qpair failed and we were unable to recover it. 00:20:42.371 [2024-04-18 17:07:57.764786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.764922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.764946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.371 qpair failed and we were unable to recover it. 00:20:42.371 [2024-04-18 17:07:57.765053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.371 [2024-04-18 17:07:57.765180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.765221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 
00:20:42.372 [2024-04-18 17:07:57.765370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.765523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.765550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.765701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.765808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.765832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.765962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.766060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.766085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.766214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.766350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.766377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 
00:20:42.372 [2024-04-18 17:07:57.766530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.766665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.766689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.766829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.767013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.767037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.767183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.767342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.767370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.767500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.767627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.767651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 
00:20:42.372 [2024-04-18 17:07:57.767774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.767892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.767919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.768072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.768209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.768233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.768395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.768549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.768574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.768694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.768833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.768863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 
00:20:42.372 [2024-04-18 17:07:57.769003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.769149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.769178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.769352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.769460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.769502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.769651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.769764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.769791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.769941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.770088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.770115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 
00:20:42.372 [2024-04-18 17:07:57.770293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.770405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.770431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.770532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.770665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.770691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.770838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.770946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.770972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.771086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.771221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.771246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 
00:20:42.372 [2024-04-18 17:07:57.771417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.771573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.771598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.771736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.771837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.771861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.771975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.772104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.772129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.772259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.772369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.772401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 
00:20:42.372 [2024-04-18 17:07:57.772545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.772699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.772727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.772858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.772989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.773014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.773133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.773286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.773313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.372 qpair failed and we were unable to recover it. 00:20:42.372 [2024-04-18 17:07:57.773431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.372 [2024-04-18 17:07:57.773550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.773592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 
00:20:42.373 [2024-04-18 17:07:57.773733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.773858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.773883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.774012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.774132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.774160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.774309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.774431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.774459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.774608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.774729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.774754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 
00:20:42.373 [2024-04-18 17:07:57.774860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.775011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.775038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.775159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.775292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.775317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.775494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.775604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.775644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.775804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.775972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.776000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 
00:20:42.373 [2024-04-18 17:07:57.776142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.776279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.776307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.776440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.776544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.776569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.776701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.776854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.776882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.777027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.777169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.777197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 
00:20:42.373 [2024-04-18 17:07:57.777378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.777487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.777527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.777678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.777835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.777860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.778009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.778151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.778179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.778311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.778437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.778463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 
00:20:42.373 [2024-04-18 17:07:57.778590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.778733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.778762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.778947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.779073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.779097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.779194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.779324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.779349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.779487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.779591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.779617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 
00:20:42.373 [2024-04-18 17:07:57.779758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.779968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.780029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.780147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.780277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.780302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.780487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.780612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.780637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 00:20:42.373 [2024-04-18 17:07:57.780786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.780944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.373 [2024-04-18 17:07:57.780968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.373 qpair failed and we were unable to recover it. 
00:20:42.377 [2024-04-18 17:07:57.807332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.807471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.807497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.807624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.807817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.807876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.808009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.808142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.808167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.808326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.808432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.808457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 
00:20:42.377 [2024-04-18 17:07:57.808564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.808708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.808735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.808850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.808997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.809022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.809174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.809304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.809344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.809521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.809646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.809671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 
00:20:42.377 [2024-04-18 17:07:57.809830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.809978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.810006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.810136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.810265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.810294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.810443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.810551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.810576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.810685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.810847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.810875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 
00:20:42.377 [2024-04-18 17:07:57.811026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.811128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.811153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.811304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.811430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.811456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.811612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.811768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.811794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.811956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.812094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.812118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 
00:20:42.377 [2024-04-18 17:07:57.812254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.812353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.812378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.812530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.812648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.812675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.812804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.812972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.812996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.813152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.813296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.813329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 
00:20:42.377 [2024-04-18 17:07:57.813487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.813617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.813642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.813773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.813905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.813930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.814057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.814210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.814237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.814377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.814524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.814552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 
00:20:42.377 [2024-04-18 17:07:57.814707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.814807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.814832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.814961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.815066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.815093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.815194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.815357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.815388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.377 qpair failed and we were unable to recover it. 00:20:42.377 [2024-04-18 17:07:57.815518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.377 [2024-04-18 17:07:57.815642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.815667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 
00:20:42.378 [2024-04-18 17:07:57.815827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.815978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.816006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.816178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.816313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.816338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.816471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.816582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.816608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.816742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.816902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.816929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 
00:20:42.378 [2024-04-18 17:07:57.817064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.817176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.817204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.817326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.817503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.817530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.817705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.817863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.817889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.817989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.818140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.818169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 
00:20:42.378 [2024-04-18 17:07:57.818322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.818456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.818483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.818614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.818819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.818889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.819034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.819212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.819240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.819357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.819515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.819541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 
00:20:42.378 [2024-04-18 17:07:57.819727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.819884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.819909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.820039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.820177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.820204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.820325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.820427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.820453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.820568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.820725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.820750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 
00:20:42.378 [2024-04-18 17:07:57.820895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.821037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.821064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.821220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.821346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.821370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.821504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.821619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.821647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.821783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.821901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.821928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 
00:20:42.378 [2024-04-18 17:07:57.822077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.822206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.822231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.822331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.822467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.822492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.822670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.822819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.822846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.823014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.823142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.823166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 
00:20:42.378 [2024-04-18 17:07:57.823343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.823499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.823525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.823625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.823757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.823782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.823943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.824073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.824114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 00:20:42.378 [2024-04-18 17:07:57.824285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.824427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.824454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.378 qpair failed and we were unable to recover it. 
00:20:42.378 [2024-04-18 17:07:57.824594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.378 [2024-04-18 17:07:57.824750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.379 [2024-04-18 17:07:57.824777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.379 qpair failed and we were unable to recover it. 00:20:42.379 [2024-04-18 17:07:57.824934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.379 [2024-04-18 17:07:57.825069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.379 [2024-04-18 17:07:57.825094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.379 qpair failed and we were unable to recover it. 00:20:42.379 [2024-04-18 17:07:57.825259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.379 [2024-04-18 17:07:57.825396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.379 [2024-04-18 17:07:57.825424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.379 qpair failed and we were unable to recover it. 00:20:42.379 [2024-04-18 17:07:57.825582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.379 [2024-04-18 17:07:57.825714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.379 [2024-04-18 17:07:57.825739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.379 qpair failed and we were unable to recover it. 
00:20:42.382 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." cycle repeats 84 more times for tqpair=0x7f1b28000b90, addr=10.0.0.2, port=4420, from 17:07:57.825855 through 17:07:57.851412 ...]
00:20:42.382 [2024-04-18 17:07:57.851544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.851651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.851676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.851854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.851993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.852021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.852138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.852283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.852311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.852498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.852591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.852616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 
00:20:42.382 [2024-04-18 17:07:57.852718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.852864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.852894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.853010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.853155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.853182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.853310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.853471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.853501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.853692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.853802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.853829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 
00:20:42.382 [2024-04-18 17:07:57.854009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.854157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.854182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.854341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.854523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.854551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.854692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.854835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.854862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.855000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.855111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.855138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 
00:20:42.382 [2024-04-18 17:07:57.855262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.855390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.855415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.855565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.855676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.855703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.855827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.855974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.855999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.856152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.856249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.856275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 
00:20:42.382 [2024-04-18 17:07:57.856424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.856594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.856627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.382 qpair failed and we were unable to recover it. 00:20:42.382 [2024-04-18 17:07:57.856818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.382 [2024-04-18 17:07:57.856977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.857002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.857158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.857283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.857311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.857463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.857625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.857650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 
00:20:42.383 [2024-04-18 17:07:57.857800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.857952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.857976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.858104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.858262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.858303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.858446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.858588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.858615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.858721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.858837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.858864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 
00:20:42.383 [2024-04-18 17:07:57.858981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.859088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.859112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.859266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.859446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.859472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.859581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.859708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.859740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.859900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.860033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.860059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 
00:20:42.383 [2024-04-18 17:07:57.860223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.860362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.860408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.860518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.860656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.860681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.860843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.860953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.860978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.861161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.861270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.861297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 
00:20:42.383 [2024-04-18 17:07:57.861447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.861572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.861600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.861728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.861883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.861908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.862063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.862196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.862223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.862335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.862514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.862542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 
00:20:42.383 [2024-04-18 17:07:57.862695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.862826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.862850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.863014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.863160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.863187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.863305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.863424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.863452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.863601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.863729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.863769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 
00:20:42.383 [2024-04-18 17:07:57.863908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.864052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.864079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.864191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.864304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.864332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.864475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.864604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.864629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.864729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.864928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.864952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 
00:20:42.383 [2024-04-18 17:07:57.865086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.865212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.865237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.865366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.865504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.383 [2024-04-18 17:07:57.865529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.383 qpair failed and we were unable to recover it. 00:20:42.383 [2024-04-18 17:07:57.865681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.865815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.865842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.865965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.866089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.866113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 
00:20:42.384 [2024-04-18 17:07:57.866270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.866435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.866460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.866622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.866755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.866784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.866940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.867084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.867114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.867258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.867392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.867417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 
00:20:42.384 [2024-04-18 17:07:57.867554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.867663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.867690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.867809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.867926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.867953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.868108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.868237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.868261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.868422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.868558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.868585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 
00:20:42.384 [2024-04-18 17:07:57.868725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.868843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.868870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.869024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.869131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.869156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.869312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.869454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.869479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 00:20:42.384 [2024-04-18 17:07:57.869578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.869740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.384 [2024-04-18 17:07:57.869765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.384 qpair failed and we were unable to recover it. 
00:20:42.387 [2024-04-18 17:07:57.895583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.895680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.895705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.895803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.895994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.896021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.896163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.896338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.896363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.896530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.896636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.896676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 
00:20:42.387 [2024-04-18 17:07:57.896825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.897008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.897032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.897196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.897378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.897412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.897564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.897666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.897691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.897854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.897999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.898026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 
00:20:42.387 [2024-04-18 17:07:57.898173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.898292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.898319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.898477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.898613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.898637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.898769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.898920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.898948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 00:20:42.387 [2024-04-18 17:07:57.899080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.899240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.387 [2024-04-18 17:07:57.899265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.387 qpair failed and we were unable to recover it. 
00:20:42.387 [2024-04-18 17:07:57.899370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.899509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.899533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.899693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.899819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.899848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.899999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.900144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.900171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.900342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.900533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.900559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 
00:20:42.388 [2024-04-18 17:07:57.900684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.900801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.900829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.900949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.901092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.901120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.901270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.901397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.901422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.901551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.901723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.901748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 
00:20:42.388 [2024-04-18 17:07:57.901880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.902055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.902079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.902233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.902330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.902354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.902516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.902627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.902653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.902788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.902911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.902943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 
00:20:42.388 [2024-04-18 17:07:57.903130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.903257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.903297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.903419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.903564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.903592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.903705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.903821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.903848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.903998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.904152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.904177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 
00:20:42.388 [2024-04-18 17:07:57.904310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.904443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.904468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.904592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.904734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.904761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.904890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.905024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.905048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.905178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.905305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.905332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 
00:20:42.388 [2024-04-18 17:07:57.905506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.905673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.905700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.905871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.905984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.906028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.906147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.906281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.906308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.906427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.906549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.906575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 
00:20:42.388 [2024-04-18 17:07:57.906710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.906838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.906862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.906991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.907159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.907186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.907340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.907503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.907528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 00:20:42.388 [2024-04-18 17:07:57.907627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.907755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.907779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.388 qpair failed and we were unable to recover it. 
00:20:42.388 [2024-04-18 17:07:57.907907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.908047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.388 [2024-04-18 17:07:57.908074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.908230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.908451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.908502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.908630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.908762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.908786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.908945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.909056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.909088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 
00:20:42.389 [2024-04-18 17:07:57.909219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.909352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.909377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.909494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.909629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.909654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.909781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.909972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.909997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.910155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.910309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.910338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 
00:20:42.389 [2024-04-18 17:07:57.910504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.910640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.910664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.910864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.910966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.910991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.911138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.911280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.911307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.911460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.911572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.911597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 
00:20:42.389 [2024-04-18 17:07:57.911698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.911804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.911829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.911925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.912051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.912075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.912212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.912322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.912346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.912487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.912625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.912649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 
00:20:42.389 [2024-04-18 17:07:57.912776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.912914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.912939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.913073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.913176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.913201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.913359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.913535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.913563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.913708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.913842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.913869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 
00:20:42.389 [2024-04-18 17:07:57.914020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.914154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.914180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.914307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.914458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.914486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.914654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.914800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.914828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 00:20:42.389 [2024-04-18 17:07:57.914959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.915122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.389 [2024-04-18 17:07:57.915146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:42.389 qpair failed and we were unable to recover it. 
00:20:42.389 [2024-04-18 17:07:57.915306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.389 [2024-04-18 17:07:57.915493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.389 [2024-04-18 17:07:57.915518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.389 qpair failed and we were unable to recover it.
00:20:42.389 [2024-04-18 17:07:57.915621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.915753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.915779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.915884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.916012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.916038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.916178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.916283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.916307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.916407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.916536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.916560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.916693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.916822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.916847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.917010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.917128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.917157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.917276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.917426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.917468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.917600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.917730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.917756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.917861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.917972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.917996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.918130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.918295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.918320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.918432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.918587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.918611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.918772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.918886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.918914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.919067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.919211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.919241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.919404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.919537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.919562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.919714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.919852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.919879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.920054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.920224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.920252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.920401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.920527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.920551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.920650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.920798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.920826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.920996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.921103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.921132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.921287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.921391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.921417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.921567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.921740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.921767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.921935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.922067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.922093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.922254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.922391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.922416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.922519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.922697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.922724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.922899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.923048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.923072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.923206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.923307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.923333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.923517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.923632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.923657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.923849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.924004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.924029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.390 qpair failed and we were unable to recover it.
00:20:42.390 [2024-04-18 17:07:57.924202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.924335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.390 [2024-04-18 17:07:57.924359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.924578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.924715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.924740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.924872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.925004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.925029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.925128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.925230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.925255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.925363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.925463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.925487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.925614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.925737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.925765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.925897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.926004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.926028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.926151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.926308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.926333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.926464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.926615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.926643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.926785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.926907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.926931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.927042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.927167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.927195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.927371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.927487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.927512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.927615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.927719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.927744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.927843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.927973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.927997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.928156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.928279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.928307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.928448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.928554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.928579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.928687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.928789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.928814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.928933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.929079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.929104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.929242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.929374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.929404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.929562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.929678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.929705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.929850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.929958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.929985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.930164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.930342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.930370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.930548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.930713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.930738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.930891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.931066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.931111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.931272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.931403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.931429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.931589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.931789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.931833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.932011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.932138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.932165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.932307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.932426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.932456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.932627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.932768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.932811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.391 qpair failed and we were unable to recover it.
00:20:42.391 [2024-04-18 17:07:57.932967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.933086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.391 [2024-04-18 17:07:57.933112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.933246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.933399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.933425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.933576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.933698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.933727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.933893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.934053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.934078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.934189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.934321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.934345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.934457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.934580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.934608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.934784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.934900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.934927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.935093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.935237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.935264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.935397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.935527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.935551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.935676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.935820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.935847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.936004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.936168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.936195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.936353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.936459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.936484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.936589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.936754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.936781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.936901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.937041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.392 [2024-04-18 17:07:57.937068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.392 qpair failed and we were unable to recover it.
00:20:42.392 [2024-04-18 17:07:57.937214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.937393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.937421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.937570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.937698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.937725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.937881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.938042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.938069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.938212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.938392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.938420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 
00:20:42.392 [2024-04-18 17:07:57.938566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.938699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.938724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.938852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.938970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.938996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.939136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.939309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.939335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.939486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.939618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.939642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 
00:20:42.392 [2024-04-18 17:07:57.939776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.939889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.939925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.940071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.940200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.940239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.940407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.940563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.940588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.940698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.940797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.940822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 
00:20:42.392 [2024-04-18 17:07:57.940974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.941088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.941115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.941261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.941400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.941442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.941549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.941713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.941738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 00:20:42.392 [2024-04-18 17:07:57.941888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.942060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.392 [2024-04-18 17:07:57.942087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.392 qpair failed and we were unable to recover it. 
00:20:42.393 [2024-04-18 17:07:57.942216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.942369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.942402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.942551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.942685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.942709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.942852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.943081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.943108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.943290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.943443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.943468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 
00:20:42.393 [2024-04-18 17:07:57.943596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.943766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.943793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.944063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.944233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.944261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.944399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.944554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.944579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.944723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.944840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.944867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 
00:20:42.393 [2024-04-18 17:07:57.944996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.945151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.945178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.945315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.945453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.945479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.945583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.945713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.945755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.945887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.946024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.946051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 
00:20:42.393 [2024-04-18 17:07:57.946220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.946340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.946367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.946533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.946638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.946678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.946825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.946953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.946977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.947149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.947312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.947339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 
00:20:42.393 [2024-04-18 17:07:57.947476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.947584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.947610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.947716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.947867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.947894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.948043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.948168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.948192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.948326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.948468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.948493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 
00:20:42.393 [2024-04-18 17:07:57.948624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.948756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.948797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.948918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.949062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.949089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.949232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.949403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.949429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 00:20:42.393 [2024-04-18 17:07:57.949544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.949648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.949673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.393 qpair failed and we were unable to recover it. 
00:20:42.393 [2024-04-18 17:07:57.949776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.393 [2024-04-18 17:07:57.949934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.949961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.950081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.950203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.950230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.950354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.950463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.950487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.950596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.950719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.950744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 
00:20:42.394 [2024-04-18 17:07:57.950873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.951067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.951091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.951223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.951385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.951410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.951585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.951688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.951712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.951863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.951983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.952010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 
00:20:42.394 [2024-04-18 17:07:57.952157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.952280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.952307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.952467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.952580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.952605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.952736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.952866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.952893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.953063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.953163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.953187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 
00:20:42.394 [2024-04-18 17:07:57.953319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.953458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.953485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.953604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.953802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.953827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.953960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.954104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.954128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.954281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.954436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.954461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 
00:20:42.394 [2024-04-18 17:07:57.954572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.954732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.954756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.954896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.955133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.955368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 
00:20:42.394 [2024-04-18 17:07:57.955623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.955876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.955984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.956009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.956132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.956257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.956284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.956414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.956547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.956571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 
00:20:42.394 [2024-04-18 17:07:57.956725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.956826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.956850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.956959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.957104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.957132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.957276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.957427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.957469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 00:20:42.394 [2024-04-18 17:07:57.957627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.957763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.394 [2024-04-18 17:07:57.957788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.394 qpair failed and we were unable to recover it. 
00:20:42.397 [2024-04-18 17:07:57.980779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.397 [2024-04-18 17:07:57.980919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.397 [2024-04-18 17:07:57.980946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.397 qpair failed and we were unable to recover it.
00:20:42.397 [2024-04-18 17:07:57.981106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.397 [2024-04-18 17:07:57.981259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.397 [2024-04-18 17:07:57.981286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.397 qpair failed and we were unable to recover it.
00:20:42.397 [2024-04-18 17:07:57.981430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.397 [2024-04-18 17:07:57.981538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.397 [2024-04-18 17:07:57.981563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.397 qpair failed and we were unable to recover it.
00:20:42.397 [2024-04-18 17:07:57.981744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.397 [2024-04-18 17:07:57.981879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.397 [2024-04-18 17:07:57.981911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.397 qpair failed and we were unable to recover it.
00:20:42.400 [2024-04-18 17:07:58.003794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.003896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.003938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.004094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.004229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.004255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.004371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.004492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.004519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.004691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.004805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.004832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 
00:20:42.400 [2024-04-18 17:07:58.004963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.005091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.005116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.005244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.005401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.005429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.005550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.005669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.005697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.005876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.006050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.006077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 
00:20:42.400 [2024-04-18 17:07:58.006221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.006353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.006391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.006534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.006692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.006719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.006846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.006951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.006975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.007075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.007228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.007252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 
00:20:42.400 [2024-04-18 17:07:58.007387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.007536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.007563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.007709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.007820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.007844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.007955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.008128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.008154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.008300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.008432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.008457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 
00:20:42.400 [2024-04-18 17:07:58.008569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.008744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.008768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.008950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.009080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.009104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.009238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.009371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.009401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 00:20:42.400 [2024-04-18 17:07:58.009509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.009639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.009663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.400 qpair failed and we were unable to recover it. 
00:20:42.400 [2024-04-18 17:07:58.009769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.009867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.400 [2024-04-18 17:07:58.009893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.010049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.010207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.010234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.010369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.010491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.010515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.010649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.010775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.010799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 
00:20:42.401 [2024-04-18 17:07:58.010943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.011087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.011114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.011234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.011379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.011408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.011541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.011658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.011682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.011834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.012005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.012032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 
00:20:42.401 [2024-04-18 17:07:58.012178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.012280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.012304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.012466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.012591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.012618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.012737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.012915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.012939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.013068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.013203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.013228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 
00:20:42.401 [2024-04-18 17:07:58.013409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.013528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.013556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.013680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.013833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.013857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.013987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.014123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.014147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.014283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.014389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.014414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 
00:20:42.401 [2024-04-18 17:07:58.014539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.014690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.014718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.014865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.014997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.015021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.015167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.015310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.015336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.015488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.015593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.015617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 
00:20:42.401 [2024-04-18 17:07:58.015731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.015862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.015887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.015995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.016125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.016150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.016304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.016436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.016466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.016594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.016696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.016720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 
00:20:42.401 [2024-04-18 17:07:58.016875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.017044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.017071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.017181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.017313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.017340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.017524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.017632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.017657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.017823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.018003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.018030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 
00:20:42.401 [2024-04-18 17:07:58.018160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.018297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.018324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.018450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.018612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.018636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.018769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.018907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.018934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 00:20:42.401 [2024-04-18 17:07:58.019083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.019221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.019247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.401 qpair failed and we were unable to recover it. 
00:20:42.401 [2024-04-18 17:07:58.019386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.401 [2024-04-18 17:07:58.019496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.019522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.402 qpair failed and we were unable to recover it. 00:20:42.402 [2024-04-18 17:07:58.019652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.019779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.019807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.402 qpair failed and we were unable to recover it. 00:20:42.402 [2024-04-18 17:07:58.019983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.020090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.020114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.402 qpair failed and we were unable to recover it. 00:20:42.402 [2024-04-18 17:07:58.020242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.020390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.020415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.402 qpair failed and we were unable to recover it. 
00:20:42.402 [2024-04-18 17:07:58.020518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.020621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.020645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.402 qpair failed and we were unable to recover it. 00:20:42.402 [2024-04-18 17:07:58.020784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.020905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.020929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.402 qpair failed and we were unable to recover it. 00:20:42.402 [2024-04-18 17:07:58.021064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.021166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.021190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.402 qpair failed and we were unable to recover it. 00:20:42.402 [2024-04-18 17:07:58.021342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.021515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.402 [2024-04-18 17:07:58.021548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.402 qpair failed and we were unable to recover it. 
00:20:42.404 [2024-04-18 17:07:58.046167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.404 [2024-04-18 17:07:58.046300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.404 [2024-04-18 17:07:58.046325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.404 qpair failed and we were unable to recover it. 00:20:42.404 [2024-04-18 17:07:58.046463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.404 [2024-04-18 17:07:58.046598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.404 [2024-04-18 17:07:58.046621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.404 qpair failed and we were unable to recover it. 00:20:42.404 [2024-04-18 17:07:58.046781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.404 [2024-04-18 17:07:58.046902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.404 [2024-04-18 17:07:58.046929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.404 qpair failed and we were unable to recover it. 00:20:42.404 [2024-04-18 17:07:58.047074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.404 [2024-04-18 17:07:58.047179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.404 [2024-04-18 17:07:58.047205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.404 qpair failed and we were unable to recover it. 
00:20:42.405 [2024-04-18 17:07:58.047325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.047426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.047451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 00:20:42.405 [2024-04-18 17:07:58.047566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.047667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.047692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 00:20:42.405 [2024-04-18 17:07:58.047792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.047922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.047946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 00:20:42.405 [2024-04-18 17:07:58.048069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.048173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.048198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 
00:20:42.405 [2024-04-18 17:07:58.048318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.048482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.048508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 00:20:42.405 [2024-04-18 17:07:58.048617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.048733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.048757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 00:20:42.405 [2024-04-18 17:07:58.048890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.049029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.049052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 00:20:42.405 [2024-04-18 17:07:58.049184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.049310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.049334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 
00:20:42.405 [2024-04-18 17:07:58.049474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.049615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.405 [2024-04-18 17:07:58.049642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.405 qpair failed and we were unable to recover it. 00:20:42.405 [2024-04-18 17:07:58.049796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.049926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.049966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.685 qpair failed and we were unable to recover it. 00:20:42.685 [2024-04-18 17:07:58.050107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.050258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.050283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.685 qpair failed and we were unable to recover it. 00:20:42.685 [2024-04-18 17:07:58.050437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.050563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.050589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.685 qpair failed and we were unable to recover it. 
00:20:42.685 [2024-04-18 17:07:58.050722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.050828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.050852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.685 qpair failed and we were unable to recover it. 00:20:42.685 [2024-04-18 17:07:58.050955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.051088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.051112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.685 qpair failed and we were unable to recover it. 00:20:42.685 [2024-04-18 17:07:58.051261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.051427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.051454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.685 qpair failed and we were unable to recover it. 00:20:42.685 [2024-04-18 17:07:58.051588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.051721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.685 [2024-04-18 17:07:58.051745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.685 qpair failed and we were unable to recover it. 
00:20:42.686 [2024-04-18 17:07:58.051872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.052006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.052029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.052139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.052245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.052268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.052366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.052502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.052526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.052632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.052738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.052761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 
00:20:42.686 [2024-04-18 17:07:58.052896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.053034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.053060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.053188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.053296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.053320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.053455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.053580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.053608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.053760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.053892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.053921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 
00:20:42.686 [2024-04-18 17:07:58.054076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.054232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.054257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.054392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.054522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.054548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.054664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.054782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.054806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.054907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.055039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.055064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 
00:20:42.686 [2024-04-18 17:07:58.055223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.055363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.055392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.055535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.055659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.055685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.055837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.055943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.055968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.056081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.056223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.056249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 
00:20:42.686 [2024-04-18 17:07:58.056389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.056503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.056529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.056658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.056785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.056809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.056988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.057149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.057173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.057279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.057405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.057430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 
00:20:42.686 [2024-04-18 17:07:58.057534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.057641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.057665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.057801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.057971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.057998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.058147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.058262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.058288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.058402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.058515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.058539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 
00:20:42.686 [2024-04-18 17:07:58.058649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.058782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.058805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.058938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.059066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.059092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.059301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.059439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.059464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.686 [2024-04-18 17:07:58.059596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.059731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.059758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 
00:20:42.686 [2024-04-18 17:07:58.059883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.060035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.686 [2024-04-18 17:07:58.060061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.686 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.060209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.060319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.060342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.060507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.060626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.060653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.060770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.060912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.060938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 
00:20:42.687 [2024-04-18 17:07:58.061064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.061185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.061209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.061321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.061469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.061496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.061611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.061723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.061749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.061877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.062012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.062036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 
00:20:42.687 [2024-04-18 17:07:58.062139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.062243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.062267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.062376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.062535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.062560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.062713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.062864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.062892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.063052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.063226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.063269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 
00:20:42.687 [2024-04-18 17:07:58.063409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.063518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.063544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.063651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.063758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.063783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.063952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.064089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.064114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.064222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.064356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.064393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 
00:20:42.687 [2024-04-18 17:07:58.064543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.064658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.064684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.064834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.064934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.064959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.065100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.065206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.065232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.065366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.065485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.065510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 
00:20:42.687 [2024-04-18 17:07:58.065662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.065836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.065881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.066014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.066148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.066173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.066301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.066457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.066482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.066616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.066743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.066786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 
00:20:42.687 [2024-04-18 17:07:58.066897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.067068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.067093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.067225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.067360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.067393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.067580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.067732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.067777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.067929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.068054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.068080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 
00:20:42.687 [2024-04-18 17:07:58.068241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.068438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.068467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.687 qpair failed and we were unable to recover it. 00:20:42.687 [2024-04-18 17:07:58.068664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.068831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.687 [2024-04-18 17:07:58.068873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.069014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.069154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.069179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.069312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.069415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.069442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 
00:20:42.688 [2024-04-18 17:07:58.069602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.069772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.069816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.069976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.070133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.070158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.070264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.070396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.070422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.070605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.070774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.070801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 
00:20:42.688 [2024-04-18 17:07:58.070954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.071084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.071109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.071221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.071354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.071379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.071545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.071725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.071750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.071903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.072054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.072080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 
00:20:42.688 [2024-04-18 17:07:58.072212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.072316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.072340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.072479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.072655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.072682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.072825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.072943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.072967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.073094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.073219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.073243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 
00:20:42.688 [2024-04-18 17:07:58.073412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.073595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.073638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.073784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.073908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.073933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.074103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.074240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.074264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.074376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.074511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.074554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 
00:20:42.688 [2024-04-18 17:07:58.074709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.074880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.074923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b20000b90 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.075050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.075181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.075206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.075348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.075529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.075562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.075717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.075860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.075888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 
00:20:42.688 [2024-04-18 17:07:58.076060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.076193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.076220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.076362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.076525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.076550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.076733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.076886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.076926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.077193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.077344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.077368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 
00:20:42.688 [2024-04-18 17:07:58.077486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.077617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.077642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.077782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.077924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.077951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.688 qpair failed and we were unable to recover it. 00:20:42.688 [2024-04-18 17:07:58.078162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.078295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.688 [2024-04-18 17:07:58.078322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.078490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.078621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.078645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 
00:20:42.689 [2024-04-18 17:07:58.078779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.078909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.078941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.079093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.079291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.079318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.079436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.079578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.079603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.079735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.079860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.079887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 
00:20:42.689 [2024-04-18 17:07:58.080067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.080235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.080263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.080406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.080582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.080606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.080756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.080923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.080950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.081125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.081306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.081333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 
00:20:42.689 [2024-04-18 17:07:58.081469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.081580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.081604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.081732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.081865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.081889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.082075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.082247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.082273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.082463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.082591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.082615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 
00:20:42.689 [2024-04-18 17:07:58.082768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.082880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.082907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.083120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.083269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.083296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.083456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.083589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.083614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.083793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.083912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.083940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 
00:20:42.689 [2024-04-18 17:07:58.084102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.084244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.084268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.084465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.084624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.084649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.084779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.084902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.084928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.085067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.085180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.085207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 
00:20:42.689 [2024-04-18 17:07:58.085386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.085533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.085557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.085679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.085788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.085814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.085987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.086128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.086155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 00:20:42.689 [2024-04-18 17:07:58.086300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.086458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.689 [2024-04-18 17:07:58.086484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.689 qpair failed and we were unable to recover it. 
00:20:42.689 [... the same sequence — two posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 entries, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats from 2024-04-18 17:07:58.086620 through 17:07:58.113427 (console timestamps 00:20:42.689-00:20:42.692); tqpair=0x125af30 throughout, except tqpair=0x7f1b18000b90 between 17:07:58.099746 and 17:07:58.101378 ...]
00:20:42.692 [2024-04-18 17:07:58.113535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.692 [2024-04-18 17:07:58.113661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.692 [2024-04-18 17:07:58.113685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.692 qpair failed and we were unable to recover it. 00:20:42.692 [2024-04-18 17:07:58.113815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.113918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.113942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.114073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.114219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.114245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.114351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.114507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.114535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 
00:20:42.693 [2024-04-18 17:07:58.114691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.114822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.114851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.115008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.115151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.115177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.115321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.115501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.115529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.115670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.115770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.115793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 
00:20:42.693 [2024-04-18 17:07:58.115924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.116031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.116055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.116169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.116319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.116346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.116524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.116629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.116653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.116813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.116958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.116985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 
00:20:42.693 [2024-04-18 17:07:58.117129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.117270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.117298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.117482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.117594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.117618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.117770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.117872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.117897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.117999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.118126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.118151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 
00:20:42.693 [2024-04-18 17:07:58.118286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.118415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.118440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.118598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.118739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.118763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.118894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.119020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.119044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.119198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.119366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.119397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 
00:20:42.693 [2024-04-18 17:07:58.119523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.119696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.119721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.119850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.119946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.119971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.120131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.120258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.120282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.120391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.120518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.120543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 
00:20:42.693 [2024-04-18 17:07:58.120693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.120875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.120900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.121037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.121170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.121194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.121366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.121506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.121531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 00:20:42.693 [2024-04-18 17:07:58.121707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.121825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.693 [2024-04-18 17:07:58.121852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.693 qpair failed and we were unable to recover it. 
00:20:42.693 [2024-04-18 17:07:58.122010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.122166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.122190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.122321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.122442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.122470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.122613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.122758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.122785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.122970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.123097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.123139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 
00:20:42.694 [2024-04-18 17:07:58.123282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.123453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.123481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.123624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.123769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.123797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.123946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.124113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.124154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.124273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.124465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.124491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 
00:20:42.694 [2024-04-18 17:07:58.124646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.124756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.124780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.124894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.125027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.125051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.125171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.125327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.125352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.125491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.125587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.125612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 
00:20:42.694 [2024-04-18 17:07:58.125738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.125870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.125895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.125997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.126123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.126148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.126276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.126406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.126431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.126535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.126639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.126664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 
00:20:42.694 [2024-04-18 17:07:58.126768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.126900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.126927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.127070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.127192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.127219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.127375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.127510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.127534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.127711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.127854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.127880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 
00:20:42.694 [2024-04-18 17:07:58.128036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.128191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.128215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.128389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.128517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.128542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.128670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.128853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.128877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.128977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.129101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.129125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 
00:20:42.694 [2024-04-18 17:07:58.129260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.129377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.129411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.694 qpair failed and we were unable to recover it. 00:20:42.694 [2024-04-18 17:07:58.129565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.129722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.694 [2024-04-18 17:07:58.129762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.695 qpair failed and we were unable to recover it. 00:20:42.695 [2024-04-18 17:07:58.129933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.130077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.130103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.695 qpair failed and we were unable to recover it. 00:20:42.695 [2024-04-18 17:07:58.130224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.130369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.130407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.695 qpair failed and we were unable to recover it. 
00:20:42.695 [2024-04-18 17:07:58.130549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.130703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.130727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.695 qpair failed and we were unable to recover it. 00:20:42.695 [2024-04-18 17:07:58.130884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.130988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.131012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.695 qpair failed and we were unable to recover it. 00:20:42.695 [2024-04-18 17:07:58.131142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.131300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.131327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.695 qpair failed and we were unable to recover it. 00:20:42.695 [2024-04-18 17:07:58.131485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.131591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.695 [2024-04-18 17:07:58.131615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.695 qpair failed and we were unable to recover it. 
00:20:42.698 [2024-04-18 17:07:58.158163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.158282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.158308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.158494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.158666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.158694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.158808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.158923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.158950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.159090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.159241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.159267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 
00:20:42.698 [2024-04-18 17:07:58.159435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.159599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.159655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.159804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.159931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.159955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.160073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.160240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.160267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.160411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.160611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.160659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 
00:20:42.698 [2024-04-18 17:07:58.160807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.160975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.161002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.161179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.161304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.161327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.161518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.161668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.161692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.161797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.161960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.161985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 
00:20:42.698 [2024-04-18 17:07:58.162140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.162300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.162328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.162449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.162589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.162617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.162727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.162859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.162883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.163023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.163209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.163235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 
00:20:42.698 [2024-04-18 17:07:58.163390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.163546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.163573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.163721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.163856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.163881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.698 qpair failed and we were unable to recover it. 00:20:42.698 [2024-04-18 17:07:58.164013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.164143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.698 [2024-04-18 17:07:58.164168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.164341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.164478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.164503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 
00:20:42.699 [2024-04-18 17:07:58.164689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.164833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.164859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.165016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.165143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.165166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.165291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.165431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.165458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.165627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.165795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.165826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 
00:20:42.699 [2024-04-18 17:07:58.165973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.166087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.166114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.166284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.166428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.166467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.166637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.166766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.166790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.166921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.167053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.167077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 
00:20:42.699 [2024-04-18 17:07:58.167230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.167330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.167355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.167488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.167586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.167609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.167714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.167860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.167886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.168014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.168154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.168180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 
00:20:42.699 [2024-04-18 17:07:58.168351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.168515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.168541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.168639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.168770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.168794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.168953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.169061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.169087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.169230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.169366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.169402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 
00:20:42.699 [2024-04-18 17:07:58.169523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.169664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.169690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.169814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.169970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.169994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.170126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.170239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.170265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.170402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.170573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.170599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 
00:20:42.699 [2024-04-18 17:07:58.170739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.170892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.170917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.171020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.171122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.171146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.699 [2024-04-18 17:07:58.171300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.171436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.699 [2024-04-18 17:07:58.171463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.699 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.171666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.171798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.171822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 
00:20:42.700 [2024-04-18 17:07:58.171992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.172123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.172163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.172264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.172393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.172418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.172532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.172649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.172676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.172888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.173090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.173113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 
00:20:42.700 [2024-04-18 17:07:58.173291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.173448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.173514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.173669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.173791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.173814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.173983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.174101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.174127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.174266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.174387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.174414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 
00:20:42.700 [2024-04-18 17:07:58.174540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.174684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.174711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.174862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.175021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.175063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.175205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.175359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.175404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.175546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.175719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.175746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 
00:20:42.700 [2024-04-18 17:07:58.175882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.175990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.176017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.176168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.176304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.176328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.176492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.176615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.176642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 00:20:42.700 [2024-04-18 17:07:58.176746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.176887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.700 [2024-04-18 17:07:58.176914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.700 qpair failed and we were unable to recover it. 
00:20:42.703 [2024-04-18 17:07:58.201259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.201391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.201418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.201555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.201684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.201707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.201834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.201975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.202001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.202112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.202256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.202281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 
00:20:42.703 [2024-04-18 17:07:58.202455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.202578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.202602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.202707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.202815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.202839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.202989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.203139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.203162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.203268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.203371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.203402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 
00:20:42.703 [2024-04-18 17:07:58.203579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.203720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.203746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.203876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.203990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.204014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.204124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.204224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.204248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.204391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.204521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.204548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 
00:20:42.703 [2024-04-18 17:07:58.204662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.204809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.204835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.703 qpair failed and we were unable to recover it. 00:20:42.703 [2024-04-18 17:07:58.204968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.703 [2024-04-18 17:07:58.205073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.205096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.205241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.205389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.205416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.205568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.205677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.205701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 
00:20:42.704 [2024-04-18 17:07:58.205832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.205935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.205959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.206061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.206159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.206183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.206286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.206390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.206414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.206528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.206638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.206680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 
00:20:42.704 [2024-04-18 17:07:58.206796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.206904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.206930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.207095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.207206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.207230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.207332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.207461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.207490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.207603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.207734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.207758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 
00:20:42.704 [2024-04-18 17:07:58.207862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.207983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.208009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.208137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.208303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.208329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.208463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.208574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.208597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.208720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.208836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.208863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 
00:20:42.704 [2024-04-18 17:07:58.208973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.209101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.209127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.209255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.209374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.209423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.209545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.209674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.209701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.209846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.209960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.209987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 
00:20:42.704 [2024-04-18 17:07:58.210114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.210228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.210258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.210418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.210553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.210577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.210707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.210851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.210877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 00:20:42.704 [2024-04-18 17:07:58.211006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.211137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.211161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.704 qpair failed and we were unable to recover it. 
00:20:42.704 [2024-04-18 17:07:58.211265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.211367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.704 [2024-04-18 17:07:58.211399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.211511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.211637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.211661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.211815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.211963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.211990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.212153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.212291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.212315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 
00:20:42.705 [2024-04-18 17:07:58.212450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.212599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.212625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.212782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.212883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.212907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.213006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.213153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.213180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.213302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.213453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.213481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 
00:20:42.705 [2024-04-18 17:07:58.213639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.213745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.213769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.213890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.214003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.214027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.214156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.214321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.214348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.214472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.214585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.214612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 
00:20:42.705 [2024-04-18 17:07:58.214737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.214880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.214907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.215032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.215134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.215158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.215257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.215387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.215414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.215531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.215650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.215677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 
00:20:42.705 [2024-04-18 17:07:58.215794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.215931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.215958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.216090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.216192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.216217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.216339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.216481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.216509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.216645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.216769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.216793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 
00:20:42.705 [2024-04-18 17:07:58.216950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.217069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.217096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.217238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.217362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.217394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.217534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.217646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.217674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 00:20:42.705 [2024-04-18 17:07:58.217792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.217937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.705 [2024-04-18 17:07:58.217963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.705 qpair failed and we were unable to recover it. 
00:20:42.708 [2024-04-18 17:07:58.242147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.242257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.242289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.708 qpair failed and we were unable to recover it. 00:20:42.708 [2024-04-18 17:07:58.242409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.242535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.242559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.708 qpair failed and we were unable to recover it. 00:20:42.708 [2024-04-18 17:07:58.242706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.242823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.242850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.708 qpair failed and we were unable to recover it. 00:20:42.708 [2024-04-18 17:07:58.242992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.243132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.243159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.708 qpair failed and we were unable to recover it. 
00:20:42.708 [2024-04-18 17:07:58.243298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.243456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.243481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.708 qpair failed and we were unable to recover it. 00:20:42.708 [2024-04-18 17:07:58.243579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.243693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.243718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.708 qpair failed and we were unable to recover it. 00:20:42.708 [2024-04-18 17:07:58.243837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.243972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.243999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.708 qpair failed and we were unable to recover it. 00:20:42.708 [2024-04-18 17:07:58.244139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.244283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.708 [2024-04-18 17:07:58.244309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 
00:20:42.709 [2024-04-18 17:07:58.244461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.244588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.244612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.244725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.244864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.244889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.245027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.245146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.245173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.245297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.245400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.245436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 
00:20:42.709 [2024-04-18 17:07:58.245553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.245734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.245758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.245891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.246025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.246049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.246175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.246367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.246399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.246531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.246624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.246648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 
00:20:42.709 [2024-04-18 17:07:58.246778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.246893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.246920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.247075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.247179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.247203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.247350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.247527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.247555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.247672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.247791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.247815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 
00:20:42.709 [2024-04-18 17:07:58.247920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.248051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.248077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.248246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.248390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.248415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.248546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.248667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.248693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.248830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.248942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.248969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 
00:20:42.709 [2024-04-18 17:07:58.249077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.249193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.249220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.249357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.249467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.249491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.249609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.249722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.249748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.249918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.250032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.250058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 
00:20:42.709 [2024-04-18 17:07:58.250177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.250287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.250314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.250417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.250560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.250583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.250704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.250811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.250837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.250991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.251096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.251122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 
00:20:42.709 [2024-04-18 17:07:58.251267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.251431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.251456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.251559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.251665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.251688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.251823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.251976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.252001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.252174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.252281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.252305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 
00:20:42.709 [2024-04-18 17:07:58.252434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.252578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.252604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.252783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.252890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.252913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.253053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.253187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.253211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.709 qpair failed and we were unable to recover it. 00:20:42.709 [2024-04-18 17:07:58.253327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.709 [2024-04-18 17:07:58.253482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.253510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 
00:20:42.710 [2024-04-18 17:07:58.253633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.253756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.253782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.253935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.254073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.254097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.254255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.254395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.254432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.254555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.254697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.254723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 
00:20:42.710 [2024-04-18 17:07:58.254849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.254949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.254972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.255110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.255288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.255315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.255441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.255550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.255574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.255708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.255828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.255856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 
00:20:42.710 [2024-04-18 17:07:58.255962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.256078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.256104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.256234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.256365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.256396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.256520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.256634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.256661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.256774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.256883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.256914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 
00:20:42.710 [2024-04-18 17:07:58.257063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.257174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.257214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.257322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.257473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.257498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.257654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.257809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.257832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 00:20:42.710 [2024-04-18 17:07:58.257938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.258030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.710 [2024-04-18 17:07:58.258053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420 00:20:42.710 qpair failed and we were unable to recover it. 
00:20:42.710 [2024-04-18 17:07:58.258150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.258267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.258291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.258424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.258538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.258562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.258661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.258754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.258777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.258907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.259053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.259080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.259215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.259343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.259367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.259483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.259596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.259619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.259796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.259954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.259977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.260105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.260258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.260285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.260397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.260581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.260608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.260752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.260854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.260879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.261004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.261188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.261211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.261334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.261445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.261488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.261667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.261796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.261819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.261953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.262101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.262141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.710 [2024-04-18 17:07:58.262299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.262405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.710 [2024-04-18 17:07:58.262430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.710 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.262572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.262687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.262713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.262872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.263001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.263025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.263148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.263277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.263303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.263438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.263541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.263564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.263684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.263825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.263852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.263970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.264115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.264138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.264268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.264411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.264442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.264596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.264739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.264765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.264910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.265012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.265039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.265191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.265318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.265341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.265472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.265603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.265628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125af30 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.265816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.265947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.265977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.266098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.266203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.266233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.266400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.266561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.266595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.266771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.266887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.266913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.267127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.267258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.267287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.267405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.267550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.267584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.267703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.267827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.267858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.268008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.268150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.268175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.268321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.268476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.268509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.268648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.268775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.268807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.268965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.269108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.269137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.269278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.269433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.269463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.269582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.269724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.269753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.269867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.270033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.270060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.270182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.270343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.270374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.270535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.270654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.270683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.270885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.271010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.271043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.271195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.271374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.271414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.711 [2024-04-18 17:07:58.271536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.271697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.711 [2024-04-18 17:07:58.271724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.711 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.271859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.271997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.272023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.272159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.272318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.272346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.272487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.272644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.272669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.272826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.272942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.272970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.273099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.273230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.273255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.273420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.273554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.273579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.273733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.273877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.273905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.274024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.274137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.274166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.274324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.274455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.274482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.274596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.274701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.274728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.274921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.275041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.275068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.275191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.275307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.275333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.275471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.275596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.275622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.275804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.275962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.275987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.276130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.276286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.276314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.276463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.276575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.276603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.276757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.276884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.276910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.277041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.277180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.277220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.277359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.277513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.277542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.277691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.277809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.277838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.277968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.278073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.278098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.278200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.278328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.278353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.278524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.278652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.278695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.278828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.278962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.278988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.279095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.279258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.279284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.279415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.279592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.279617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.279770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.279927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.279952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.712 qpair failed and we were unable to recover it.
00:20:42.712 [2024-04-18 17:07:58.280113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.712 [2024-04-18 17:07:58.280254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.280279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.280420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.280551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.280576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.280709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.280886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.280914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.281053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.281172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.281200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.281376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.281504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.281534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.281715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.281829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.281855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.281988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.282087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.282112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.282246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.282349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.282374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.282545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.282706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.282733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.282885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.283016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.283041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.283164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.283309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.283338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.283510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.283663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.283691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.283865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.283979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.284007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.284120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.284249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.284274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.284394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.284531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.284560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.284669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.284828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.713 [2024-04-18 17:07:58.284853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.713 qpair failed and we were unable to recover it.
00:20:42.713 [2024-04-18 17:07:58.284960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.285080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.285107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.285263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.285395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.285422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.285549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.285697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.285725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.285870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.286022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.286050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 
00:20:42.713 [2024-04-18 17:07:58.286164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.286281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.286308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.286476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.286590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.286617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.286778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.286962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.286990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.287114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.287226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.287254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 
00:20:42.713 [2024-04-18 17:07:58.287457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.287607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.287641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.287767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.287897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.287922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.288030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.288129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.288154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.713 [2024-04-18 17:07:58.288253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.288408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.288437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 
00:20:42.713 [2024-04-18 17:07:58.288597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.288764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.713 [2024-04-18 17:07:58.288791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.713 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.288922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.289081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.289106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.289262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.289453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.289480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.289644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.289816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.289843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 
00:20:42.714 [2024-04-18 17:07:58.290000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.290156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.290183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.290332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.290449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.290476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.290610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.290757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.290790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.290931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.291061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.291087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 
00:20:42.714 [2024-04-18 17:07:58.291219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.291318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.291345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.291511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.291613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.291638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.291772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.291902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.291927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.292033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.292140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.292165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 
00:20:42.714 [2024-04-18 17:07:58.292323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.292480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.292509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.292660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.292815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.292840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.292965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.293112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.293140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.293256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.293376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.293407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 
00:20:42.714 [2024-04-18 17:07:58.293553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.293710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.293747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.293870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.294012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.294037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.294171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.294346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.294374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.294517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.294650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.294692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 
00:20:42.714 [2024-04-18 17:07:58.294815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.294979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.295007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.295123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.295255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.295280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.295415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.295547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.295575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.295690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.295859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.295887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 
00:20:42.714 [2024-04-18 17:07:58.296055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.296223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.296250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.296445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.296604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.296631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.296743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.296852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.296892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.714 qpair failed and we were unable to recover it. 00:20:42.714 [2024-04-18 17:07:58.297035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.714 [2024-04-18 17:07:58.297194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.297219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 
00:20:42.715 [2024-04-18 17:07:58.297401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.297552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.297580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.297735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.297847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.297873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.297993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.298172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.298197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.298358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.298509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.298535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 
00:20:42.715 [2024-04-18 17:07:58.298642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.298823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.298851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.299011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.299171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.299197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.299332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.299489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.299519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.299700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.299846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.299873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 
00:20:42.715 [2024-04-18 17:07:58.300013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.300164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.300193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.300326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.300498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.300530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.300690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.300808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.300835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.300993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.301157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.301183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 
00:20:42.715 [2024-04-18 17:07:58.301312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.301492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.301521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.301648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.301780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.301805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.301975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.302084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.302112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 00:20:42.715 [2024-04-18 17:07:58.302288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.302449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.715 [2024-04-18 17:07:58.302475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.715 qpair failed and we were unable to recover it. 
00:20:42.715 [2024-04-18 17:07:58.302585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.715 [2024-04-18 17:07:58.302693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.715 [2024-04-18 17:07:58.302718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.715 qpair failed and we were unable to recover it.
00:20:42.715 [... the same four-line failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats verbatim from 17:07:58.302850 through 17:07:58.330141 ...]
00:20:42.718 [2024-04-18 17:07:58.330284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.330399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.330428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.330592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.330697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.330722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.330884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.331055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.331081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.331215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.331365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.331413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 
00:20:42.718 [2024-04-18 17:07:58.331561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.331728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.331756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.331888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.332023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.332055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.332208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.332394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.332420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.332524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.332681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.332706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 
00:20:42.718 [2024-04-18 17:07:58.332836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.332987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.333016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.333153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.333313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.333338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.333538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.333700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.333725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.333881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.334024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.334053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 
00:20:42.718 [2024-04-18 17:07:58.334223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.334362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.334397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.718 qpair failed and we were unable to recover it. 00:20:42.718 [2024-04-18 17:07:58.334582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.334754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.718 [2024-04-18 17:07:58.334782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.334894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.335053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.335078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.335193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.335345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.335378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.335549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.335693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.335720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.335842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.335944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.335971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.336132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.336269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.336297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.336422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.336565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.336593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.336705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.336862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.336890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.337019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.337177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.337218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.337330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.337505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.337534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.337650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.337792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.337820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.337991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.338134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.338162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.338286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.338397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.338427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.338583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.338726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.338753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.338893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.339013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.339042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.339212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.339361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.339413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.339547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.339694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.339720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.339855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.339986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.340012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.340109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.340241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.340266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.340424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.340544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.340572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.340701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.340836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.340861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.341039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.341232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.341257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.341391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.341499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.341531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.341635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.341736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.341762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.341888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.342020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.342045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.342208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.342368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.342403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.342524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.342647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.342675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.342818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.342941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.342969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.343099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.343205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.343231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.343343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.343499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.343528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.343646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.343804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.343829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.343959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.344055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.344096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.344252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.344357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.344390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.344529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.344684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.344709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.344837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.344988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.345015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 
00:20:42.719 [2024-04-18 17:07:58.345183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.345302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.345330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.719 qpair failed and we were unable to recover it. 00:20:42.719 [2024-04-18 17:07:58.345487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.719 [2024-04-18 17:07:58.345647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.345672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.720 qpair failed and we were unable to recover it. 00:20:42.720 [2024-04-18 17:07:58.345824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.345968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.345996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.720 qpair failed and we were unable to recover it. 00:20:42.720 [2024-04-18 17:07:58.346145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.346293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.346323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.720 qpair failed and we were unable to recover it. 
00:20:42.720 [2024-04-18 17:07:58.346469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.346591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.346619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.720 qpair failed and we were unable to recover it. 00:20:42.720 [2024-04-18 17:07:58.346770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.346905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.346930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.720 qpair failed and we were unable to recover it. 00:20:42.720 [2024-04-18 17:07:58.347062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.347245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.347273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.720 qpair failed and we were unable to recover it. 00:20:42.720 [2024-04-18 17:07:58.347423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.347577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.720 [2024-04-18 17:07:58.347605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:42.720 qpair failed and we were unable to recover it. 
00:20:42.720 [2024-04-18 17:07:58.347759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.347899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.347927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.348084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.348222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.348265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.348402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.348575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.348604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.348746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.348930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.348955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.349059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.349209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.349239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.349395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.349506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.349532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.349659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.349814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.349841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.349988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.350114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.350139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.350266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.350392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.350418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.350550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.350707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.350732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.350898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.351034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.351062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.351235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.351402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.351428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.351528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.351633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.351660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.351792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.351920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.351945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.352077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.352261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.352289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.352435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.352603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.352631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.352781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.352897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.352924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.353077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.353212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.353237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.353367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.353550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.353575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.353691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.353817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.353843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.354024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.354165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.354194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.354345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.354452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.354478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.354612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.354727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.354756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.354867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.355040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.355068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.355213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.355354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.355390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.355520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.355650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.355676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.355832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.355950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.355980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.356130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.356251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.356279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.720 [2024-04-18 17:07:58.356454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.356572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.720 [2024-04-18 17:07:58.356601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.720 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.356753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.356880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.356906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.357042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.357225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.357253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.357367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.357497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.357525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.357648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.357790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.357818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.357969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.358076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.358101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.358224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.358404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.358432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.358580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.358689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.358718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.358827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.358975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.359002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.359131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.359237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.359262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.359400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.359514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.359540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.359646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.359770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.359795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.359943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.360104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.360133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.360290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.360432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.360468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.360599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.360783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.360811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.360971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.361076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.361103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.361234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.361408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.361438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.361588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.361692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.361718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.361827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.361988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.362013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.362135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.362287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.362313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.362448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.362558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.362584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.362692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.362819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.362845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.362982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.363134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.363162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.363329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.363441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.363467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.363656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.363828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.363855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.364031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.364163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.364203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.364358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.364475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.364500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.364637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.364737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.364763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.364892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.365052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.365080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.365225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.365329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.365354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.365544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.365698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.365724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.365885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.366004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.366033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.366210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.366353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.366391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.366509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.366611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.366637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.366767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.366873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.366897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.721 qpair failed and we were unable to recover it.
00:20:42.721 [2024-04-18 17:07:58.367001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.367109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.721 [2024-04-18 17:07:58.367134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.367264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.367399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.367425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.367583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.367686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.367711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.367845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.367980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.368005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.368129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.368300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.368327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.368473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.368615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.368643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.368795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.368960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.368985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.369143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.369278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.369303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.369413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.369598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.369627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.369802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.370023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.370070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.370248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.370352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.370378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.370574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.370778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.370826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.371081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.371224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.722 [2024-04-18 17:07:58.371251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:42.722 qpair failed and we were unable to recover it.
00:20:42.722 [2024-04-18 17:07:58.371374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.371563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.371590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.371718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.371849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.371875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.371978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.372099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.372126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.372296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.372441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.372470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.372617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.372743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.372772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.372952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.373063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.373087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.373197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.373328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.373352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.373467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.373572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.373596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.373749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.373886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.373913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.374055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.374160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.374184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.374343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.374461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.374488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.374636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.374773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.005 [2024-04-18 17:07:58.374800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.005 qpair failed and we were unable to recover it.
00:20:43.005 [2024-04-18 17:07:58.374980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.005 [2024-04-18 17:07:58.375125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.005 [2024-04-18 17:07:58.375152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.005 qpair failed and we were unable to recover it. 00:20:43.005 [2024-04-18 17:07:58.375305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.005 [2024-04-18 17:07:58.375407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.005 [2024-04-18 17:07:58.375433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.005 qpair failed and we were unable to recover it. 00:20:43.005 [2024-04-18 17:07:58.375577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.005 [2024-04-18 17:07:58.375756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.005 [2024-04-18 17:07:58.375781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.005 qpair failed and we were unable to recover it. 00:20:43.005 [2024-04-18 17:07:58.375928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.376064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.376088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 
00:20:43.006 [2024-04-18 17:07:58.376198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.376335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.376362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.376532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.376660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.376685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.376793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.376922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.376946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.377093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.377262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.377289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 
00:20:43.006 [2024-04-18 17:07:58.377407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.377512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.377539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.377667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.377827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.377852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.378013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.378145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.378170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.378276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.378440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.378465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 
00:20:43.006 [2024-04-18 17:07:58.378622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.378814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.378839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.378945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.379100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.379124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.379314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.379463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.379491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.379630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.379784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.379810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 
00:20:43.006 [2024-04-18 17:07:58.379940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.380131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.380158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.380312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.380446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.380471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.380579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.380734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.380759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.380922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.381103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.381130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 
00:20:43.006 [2024-04-18 17:07:58.381295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.381454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.381480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.381657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.381788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.381830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.382015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.382145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.382174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.382327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.382498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.382526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 
00:20:43.006 [2024-04-18 17:07:58.382708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.382852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.382879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.383034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.383212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.383239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.383352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.383539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.383567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.383741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.383872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.383898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 
00:20:43.006 [2024-04-18 17:07:58.384043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.384195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.384221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.006 qpair failed and we were unable to recover it. 00:20:43.006 [2024-04-18 17:07:58.384376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.006 [2024-04-18 17:07:58.384541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.384580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.384750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.384920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.384947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.385092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.385232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.385259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 
00:20:43.007 [2024-04-18 17:07:58.385443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.385551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.385580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.385714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.385872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.385914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.386031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.386180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.386205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.386339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.386447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.386472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 
00:20:43.007 [2024-04-18 17:07:58.386629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.386766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.386792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.386962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.387091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.387132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.387326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.387450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.387475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.387603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.387726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.387752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 
00:20:43.007 [2024-04-18 17:07:58.387891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.388063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.388091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.388217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.388349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.388372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.388503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.388673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.388704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.388823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.388958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.388983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 
00:20:43.007 [2024-04-18 17:07:58.389168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.389294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.389318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.389471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.389604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.389628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.389761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.389861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.389884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.390037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.390204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.390230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 
00:20:43.007 [2024-04-18 17:07:58.390372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.390518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.390542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.390730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.390859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.390883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.391039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.391184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.391210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.391324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.391439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.391466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 
00:20:43.007 [2024-04-18 17:07:58.391606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.391779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.391811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.007 [2024-04-18 17:07:58.391960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.392086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.007 [2024-04-18 17:07:58.392111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.007 qpair failed and we were unable to recover it. 00:20:43.008 [2024-04-18 17:07:58.392221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.392386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.392412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.008 qpair failed and we were unable to recover it. 00:20:43.008 [2024-04-18 17:07:58.392599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.392731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.392756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.008 qpair failed and we were unable to recover it. 
00:20:43.008 [2024-04-18 17:07:58.392931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.393075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.393101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.008 qpair failed and we were unable to recover it. 00:20:43.008 [2024-04-18 17:07:58.393257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.393391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.393416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.008 qpair failed and we were unable to recover it. 00:20:43.008 [2024-04-18 17:07:58.393580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.393712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.393738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.008 qpair failed and we were unable to recover it. 00:20:43.008 [2024-04-18 17:07:58.393874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.394002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.008 [2024-04-18 17:07:58.394026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.008 qpair failed and we were unable to recover it. 
00:20:43.011 [2024-04-18 17:07:58.420941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.421113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.421141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.011 qpair failed and we were unable to recover it. 00:20:43.011 [2024-04-18 17:07:58.421266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.421420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.421445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.011 qpair failed and we were unable to recover it. 00:20:43.011 [2024-04-18 17:07:58.421603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.421778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.421803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.011 qpair failed and we were unable to recover it. 00:20:43.011 [2024-04-18 17:07:58.421958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.422134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.422159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.011 qpair failed and we were unable to recover it. 
00:20:43.011 [2024-04-18 17:07:58.422317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.422499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.422526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.011 qpair failed and we were unable to recover it. 00:20:43.011 [2024-04-18 17:07:58.422668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.422771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.011 [2024-04-18 17:07:58.422795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.422966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.423080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.423107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.423249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.423396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.423424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 
00:20:43.012 [2024-04-18 17:07:58.423579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.423725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.423752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.423878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.423985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.424009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.424196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.424364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.424400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.424553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.424728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.424755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 
00:20:43.012 [2024-04-18 17:07:58.424927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.425074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.425097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.425227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.425393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.425419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.425568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.425711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.425737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.425959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.426106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.426133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 
00:20:43.012 [2024-04-18 17:07:58.426279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.426395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.426423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.426568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.426705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.426729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.426836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.426990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.427016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.427156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.427305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.427333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 
00:20:43.012 [2024-04-18 17:07:58.427515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.427679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.427705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.427864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.427968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.427992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.428127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.428258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.428281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.428411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.428606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.428631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 
00:20:43.012 [2024-04-18 17:07:58.428765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.428902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.428931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.429057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.429193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.429218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.429354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.429516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.429543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.429666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.429804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.429830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 
00:20:43.012 [2024-04-18 17:07:58.430015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.430148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.430172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.430304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.430461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.430504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.430618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.430753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.430780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.012 qpair failed and we were unable to recover it. 00:20:43.012 [2024-04-18 17:07:58.430949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.012 [2024-04-18 17:07:58.431061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.431087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 
00:20:43.013 [2024-04-18 17:07:58.431205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.431350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.431377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 00:20:43.013 [2024-04-18 17:07:58.431535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.431646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.431670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 00:20:43.013 [2024-04-18 17:07:58.431824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.431981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.432008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 00:20:43.013 [2024-04-18 17:07:58.432165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.432325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.432352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 
00:20:43.013 [2024-04-18 17:07:58.432521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.432668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.432695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 00:20:43.013 [2024-04-18 17:07:58.432829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.432987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.433012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 00:20:43.013 [2024-04-18 17:07:58.433172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.433299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.433323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 00:20:43.013 [2024-04-18 17:07:58.433485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.433666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.013 [2024-04-18 17:07:58.433698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.013 qpair failed and we were unable to recover it. 
00:20:43.013 [2024-04-18 17:07:58.434960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1765359 Killed                  "${NVMF_APP[@]}" "$@"
00:20:43.013 [2024-04-18 17:07:58.435010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.013 qpair failed and we were unable to recover it.
00:20:43.013 17:07:58 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:20:43.013 17:07:58 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:20:43.013 17:07:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:43.013 17:07:58 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:43.013 17:07:58 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / qpair failed records interleaved with the shell trace above, 17:07:58.435 through 17:07:58.436 ...]
[... the connect() failed (errno = 111) / sock connection error / qpair failed sequence for tqpair=0x7f1b28000b90 repeats from 17:07:58.437 through 17:07:58.441 ...]
00:20:43.014 [2024-04-18 17:07:58.441100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.441283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.441309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 [2024-04-18 17:07:58.441482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.441619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.441649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 17:07:58 -- nvmf/common.sh@470 -- # nvmfpid=1765927
00:20:43.014 17:07:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:20:43.014 [2024-04-18 17:07:58.441795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 17:07:58 -- nvmf/common.sh@471 -- # waitforlisten 1765927
00:20:43.014 [2024-04-18 17:07:58.441979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.442008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 17:07:58 -- common/autotest_common.sh@817 -- # '[' -z 1765927 ']'
00:20:43.014 [2024-04-18 17:07:58.442140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 17:07:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:43.014 [2024-04-18 17:07:58.442276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.442300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 17:07:58 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:43.014 [2024-04-18 17:07:58.442433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 17:07:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:43.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:43.014 [2024-04-18 17:07:58.442590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.442615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 17:07:58 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:43.014 [2024-04-18 17:07:58.442781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 17:07:58 -- common/autotest_common.sh@10 -- # set +x
00:20:43.014 [2024-04-18 17:07:58.442950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.442977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 [2024-04-18 17:07:58.443137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.443267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.443291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 [2024-04-18 17:07:58.443420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.443553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.443577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 [2024-04-18 17:07:58.443734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.443885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.443914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
00:20:43.014 [2024-04-18 17:07:58.444100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.444232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.014 [2024-04-18 17:07:58.444257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.014 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair recovery failure sequence repeated for each retry through 17:07:58.464299 ...]
00:20:43.018 [2024-04-18 17:07:58.464423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.464536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.464563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.464703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.464853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.464879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.465031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.465161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.465202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.465359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.465504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.465530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 
00:20:43.018 [2024-04-18 17:07:58.465636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.465748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.465772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.465903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.466058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.466081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.466230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.466331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.466355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.466495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.466644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.466669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 
00:20:43.018 [2024-04-18 17:07:58.466793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.466924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.466949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.467076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.467235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.467260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.467397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.467507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.467532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.467702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.467804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.467828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 
00:20:43.018 [2024-04-18 17:07:58.467939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.468041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.468065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.468174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.468299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.468324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.468456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.468588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.468612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 00:20:43.018 [2024-04-18 17:07:58.468770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.468883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.018 [2024-04-18 17:07:58.468907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.018 qpair failed and we were unable to recover it. 
00:20:43.018 [2024-04-18 17:07:58.469033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.469149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.469178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.469298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.469470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.469498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.469637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.469743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.469770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.469917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.470032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.470058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 
00:20:43.019 [2024-04-18 17:07:58.470178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.470300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.470327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.470474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.470603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.470630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.470827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.471003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.471030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.471186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.471281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.471306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 
00:20:43.019 [2024-04-18 17:07:58.471430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.471539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.471564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.471724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.471835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.471862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.472048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.472149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.472175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.472307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.472442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.472471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 
00:20:43.019 [2024-04-18 17:07:58.472662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.472783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.472811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.472954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.473125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.473150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.473303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.473415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.473440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.473572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.473703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.473727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 
00:20:43.019 [2024-04-18 17:07:58.473836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.473938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.473964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.474090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.474214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.474241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.474401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.474506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.474532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.474705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.474834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.474859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 
00:20:43.019 [2024-04-18 17:07:58.474965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.475109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.475136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.475282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.475433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.475461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.475616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.475749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.475774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.475946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.476063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.476090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 
00:20:43.019 [2024-04-18 17:07:58.476261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.476375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.476412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.019 qpair failed and we were unable to recover it. 00:20:43.019 [2024-04-18 17:07:58.476535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.476683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.019 [2024-04-18 17:07:58.476710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.476862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.476999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.477023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.477127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.477229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.477254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 
00:20:43.020 [2024-04-18 17:07:58.477388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.477530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.477557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.477676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.477792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.477820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.477950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.478047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.478071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.478226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.478359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.478400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 
00:20:43.020 [2024-04-18 17:07:58.478533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.478657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.478698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.478831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.478939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.478963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.479069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.479168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.479193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.479330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.479474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.479502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 
00:20:43.020 [2024-04-18 17:07:58.479624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.479764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.479792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.479919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.480037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.480064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.480222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.480388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.480413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.480537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.480678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.480706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 
00:20:43.020 [2024-04-18 17:07:58.480862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.480997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.481022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.481150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.481297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.481324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.481450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.481558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.481584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.481719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.481865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.481893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 
00:20:43.020 [2024-04-18 17:07:58.482050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.482178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.482204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.482331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.482495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.482520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.482651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.482785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.482810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.482964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.483083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.483110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 
00:20:43.020 [2024-04-18 17:07:58.483230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.483387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.483413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.483527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.483662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.483687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.483787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.483884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.483908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.020 qpair failed and we were unable to recover it. 00:20:43.020 [2024-04-18 17:07:58.484035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.484137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.020 [2024-04-18 17:07:58.484162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 
00:20:43.021 [2024-04-18 17:07:58.484313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.484482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.484510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.484656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.484766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.484794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.484939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.485034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.485057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.485200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.485340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.485367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 
00:20:43.021 [2024-04-18 17:07:58.485490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.485600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.485626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.485776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.485925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.485950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.486060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.486162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.486185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.486314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.486439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.486464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 
00:20:43.021 [2024-04-18 17:07:58.486587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.486711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.486738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.486906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.487052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.487080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.487201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.487354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.487408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.487452] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:20:43.021 [2024-04-18 17:07:58.487528] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.021 [2024-04-18 17:07:58.487552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.487702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.487727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.487876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.488004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.488027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.488140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.488295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.488319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 
00:20:43.021 [2024-04-18 17:07:58.488426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.488555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.488580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.488737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.488859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.488886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.489036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.489192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.489217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.489324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.489462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.489487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 
00:20:43.021 [2024-04-18 17:07:58.489620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.489749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.489773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.021 qpair failed and we were unable to recover it. 00:20:43.021 [2024-04-18 17:07:58.489906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.490034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.021 [2024-04-18 17:07:58.490065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.490190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.490335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.490361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.490509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.490625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.490653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 
00:20:43.022 [2024-04-18 17:07:58.490770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.490877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.490901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.491095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.491217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.491243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.491392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.491509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.491538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.491696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.491805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.491833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 
00:20:43.022 [2024-04-18 17:07:58.491983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.492114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.492139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.492253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.492361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.492408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.492528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.492632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.492658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.492799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.492939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.492964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 
00:20:43.022 [2024-04-18 17:07:58.493131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.493233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.493258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.493359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.493503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.493528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.493643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.493769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.493796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.493941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.494074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.494099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 
00:20:43.022 [2024-04-18 17:07:58.494239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.494371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.494404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.494528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.494646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.494673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.494804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.494949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.494976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.495116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.495293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.495318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 
00:20:43.022 [2024-04-18 17:07:58.495435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.495571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.495595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.495775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.495918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.495947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.496078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.496214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.496241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.496366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.496551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.496578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 
00:20:43.022 [2024-04-18 17:07:58.496707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.496833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.496857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.496986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.497130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.497157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.497305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.497442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.497467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.022 qpair failed and we were unable to recover it. 00:20:43.022 [2024-04-18 17:07:58.497618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.497770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.022 [2024-04-18 17:07:58.497796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 
00:20:43.023 [2024-04-18 17:07:58.497906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.498007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.498032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.498186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.498303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.498331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.498492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.498598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.498624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.498792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.498907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.498934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 
00:20:43.023 [2024-04-18 17:07:58.499078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.499192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.499216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.499326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.499438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.499463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.499564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.499701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.499725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.499871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.499998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.500024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 
00:20:43.023 [2024-04-18 17:07:58.500158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.500261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.500285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.500430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.500546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.500573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.500688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.500848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.500873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.501009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.501168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.501196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 
00:20:43.023 [2024-04-18 17:07:58.501346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.501481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.501506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.501631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.501754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.501781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.501892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.502018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.502045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.502168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.502280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.502307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 
00:20:43.023 [2024-04-18 17:07:58.502440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.502540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.502565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.502678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.502796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.502820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.502922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.503021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.503045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.503155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.503274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.503301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 
00:20:43.023 [2024-04-18 17:07:58.503436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.503544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.503568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.503678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.503807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.503834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.503953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.504067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.504094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.023 [2024-04-18 17:07:58.504207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.504361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.504406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 
00:20:43.023 [2024-04-18 17:07:58.504537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.504647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.023 [2024-04-18 17:07:58.504672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.023 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.504800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.504913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.504940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.505053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.505209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.505233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.505357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.505462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.505487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 
00:20:43.024 [2024-04-18 17:07:58.505584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.505717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.505741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.505875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.506049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.506073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.506210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.506363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.506397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.506532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.506649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.506675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 
00:20:43.024 [2024-04-18 17:07:58.506815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.506919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.506942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.507071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.507179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.507203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.507378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.507515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.507539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.507634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.507743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.507767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 
00:20:43.024 [2024-04-18 17:07:58.507901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.508003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.508027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.508146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.508300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.508325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.508456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.508616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.508642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.508752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.508865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.508892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 
00:20:43.024 [2024-04-18 17:07:58.509020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.509127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.509152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.509282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.509457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.509485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.509620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.509738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.509765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.509909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.510045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.510072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 
00:20:43.024 [2024-04-18 17:07:58.510222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.510336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.510371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.510524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.510626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.510650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.510757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.510873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.510896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.511011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.511150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.511174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 
00:20:43.024 [2024-04-18 17:07:58.511311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.511415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.511439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.511545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.511644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.511668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.024 qpair failed and we were unable to recover it. 00:20:43.024 [2024-04-18 17:07:58.511850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.024 [2024-04-18 17:07:58.511970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.511997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.512136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.512278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.512305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 
00:20:43.025 [2024-04-18 17:07:58.512441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.512557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.512583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.512686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.512820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.512844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.512884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1268860 (9): Bad file descriptor 00:20:43.025 [2024-04-18 17:07:58.513115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.513237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.513266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.513393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.513501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.513527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 
00:20:43.025 [2024-04-18 17:07:58.513637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.513781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.513807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.513942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.514043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.514068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.514174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.514307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.514334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.514450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.514562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.514588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 
00:20:43.025 [2024-04-18 17:07:58.514692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.514798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.514824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.514930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.515039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.515064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.515180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.515286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.515313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.515442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.515548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.515574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 
00:20:43.025 [2024-04-18 17:07:58.515694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.515798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.515825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.515957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.516074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.516102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.516236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.516346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.516371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.516502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.516611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.516636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 
00:20:43.025 [2024-04-18 17:07:58.516743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.516850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.516875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.516996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.517098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.517123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.517256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.517364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.517405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.517518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.517622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.517647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 
00:20:43.025 [2024-04-18 17:07:58.517786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.517920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.517946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.518080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.518188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.518213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.518333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.518454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.518480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 00:20:43.025 [2024-04-18 17:07:58.518588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.518696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.025 [2024-04-18 17:07:58.518721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.025 qpair failed and we were unable to recover it. 
00:20:43.026 [2024-04-18 17:07:58.518852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.518954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.518981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.519112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.519249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.519274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.519396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.519512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.519539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.519647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.519747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.519773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 
00:20:43.026 [2024-04-18 17:07:58.519880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.520010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.520047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.520178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.520291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.520316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.520425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.520533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.520559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.520665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.520832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.520861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 
00:20:43.026 [2024-04-18 17:07:58.521023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.521131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.521157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.521268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.521363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.521403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.521513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.521609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.521634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 00:20:43.026 [2024-04-18 17:07:58.521781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.521912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.026 [2024-04-18 17:07:58.521937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.026 qpair failed and we were unable to recover it. 
00:20:43.027 EAL: No free 2048 kB hugepages reported on node 1 
00:20:43.027 [2024-04-18 17:07:58.526414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.027 [2024-04-18 17:07:58.526528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.027 [2024-04-18 17:07:58.526556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.027 qpair failed and we were unable to recover it. 
00:20:43.029 [2024-04-18 17:07:58.543466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.029 [2024-04-18 17:07:58.543576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.029 [2024-04-18 17:07:58.543601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.029 qpair failed and we were unable to recover it. 00:20:43.029 [2024-04-18 17:07:58.543717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.029 [2024-04-18 17:07:58.543854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.029 [2024-04-18 17:07:58.543879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.029 qpair failed and we were unable to recover it. 00:20:43.029 [2024-04-18 17:07:58.544007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.029 [2024-04-18 17:07:58.544141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.029 [2024-04-18 17:07:58.544166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.029 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.544271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.544378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.544422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 
00:20:43.030 [2024-04-18 17:07:58.544523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.544622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.544647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.544784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.544920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.544944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.545064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.545196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.545220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.545366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.545521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.545546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 
00:20:43.030 [2024-04-18 17:07:58.545646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.545752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.545776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.545908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.546039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.546064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.546173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.546303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.546328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.546477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.546573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.546598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 
00:20:43.030 [2024-04-18 17:07:58.546749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.546861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.546889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.547032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.547132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.547156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.547263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.547404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.547431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.547535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.547632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.547656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 
00:20:43.030 [2024-04-18 17:07:58.547786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.547922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.547947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.548086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.548192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.548216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.548323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.548437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.548462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.548556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.548664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.548688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 
00:20:43.030 [2024-04-18 17:07:58.548823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.548928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.548952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.549094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.549210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.549234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.549332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.549436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.549465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.549572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.549700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.549725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 
00:20:43.030 [2024-04-18 17:07:58.549840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.549968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.549993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.550093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.550197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.550222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.550391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.550489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.550514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.030 qpair failed and we were unable to recover it. 00:20:43.030 [2024-04-18 17:07:58.550642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.030 [2024-04-18 17:07:58.550746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.550772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 
00:20:43.031 [2024-04-18 17:07:58.550878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.551013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.551037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.551176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.551345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.551369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.551488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.551591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.551616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.551751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.551904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.551929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 
00:20:43.031 [2024-04-18 17:07:58.552064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.552173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.552204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.552325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.552435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.552461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.552570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.552683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.552709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.552856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.553014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.553039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 
00:20:43.031 [2024-04-18 17:07:58.553142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.553275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.553301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.553433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.553562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.553587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.553723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.553864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.553889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.554035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.554165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.554190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 
00:20:43.031 [2024-04-18 17:07:58.554326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.554502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.554528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.554637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.554778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.554803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.554939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.555038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.555067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.555192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.555324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.555349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 
00:20:43.031 [2024-04-18 17:07:58.555468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.555603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.555628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.555746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.555848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.555874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.556016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.556122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.556147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.556257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.556393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.556418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 
00:20:43.031 [2024-04-18 17:07:58.556522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.556629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.556653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.556786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.556927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.556951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.031 qpair failed and we were unable to recover it. 00:20:43.031 [2024-04-18 17:07:58.557079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.031 [2024-04-18 17:07:58.557183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.557207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.557321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.557439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.557465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 
00:20:43.032 [2024-04-18 17:07:58.557620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.557750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.557774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.557915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.558048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.558072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.558207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.558305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.558330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.558443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.558546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.558571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 
00:20:43.032 [2024-04-18 17:07:58.558689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.558800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.558826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.558971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.559108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.559132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.559235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.559335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.559360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.559481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.559614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.559640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 
00:20:43.032 [2024-04-18 17:07:58.559746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.559863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.559887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.559971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.032 [2024-04-18 17:07:58.559991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.560088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.560112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.560222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.560349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.560394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.560502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.560634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.560661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 
00:20:43.032 [2024-04-18 17:07:58.560777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.560920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.560945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.561075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.561213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.561238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.561389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.561517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.561542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.561673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.561787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.561811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 
00:20:43.032 [2024-04-18 17:07:58.561924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.562035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.562062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.562172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.562331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.562355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.562489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.562591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.562615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.562718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.562856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.562880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 
00:20:43.032 [2024-04-18 17:07:58.562999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.563140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.563169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.563271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.563415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.563441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.032 qpair failed and we were unable to recover it. 00:20:43.032 [2024-04-18 17:07:58.563574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.032 [2024-04-18 17:07:58.563707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.563731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.563839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.563967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.563991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 
00:20:43.033 [2024-04-18 17:07:58.564120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.564254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.564278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.564387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.564497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.564521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.564651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.564779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.564804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.564977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.565079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.565104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 
00:20:43.033 [2024-04-18 17:07:58.565203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.565358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.565396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.565540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.565639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.565664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.565785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.565914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.565938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.566098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.566201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.566226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 
00:20:43.033 [2024-04-18 17:07:58.566341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.566516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.566542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.566682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.566804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.566829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.566930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.567060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.567085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.567202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.567372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.567407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 
00:20:43.033 [2024-04-18 17:07:58.567539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.567669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.567705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.567846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.567954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.567978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.568150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.568284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.568309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.568483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.568612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.568636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 
00:20:43.033 [2024-04-18 17:07:58.568758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.568891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.568916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.569063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.569178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.569209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.569342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.569461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.569486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.569617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.569745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.569770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 
00:20:43.033 [2024-04-18 17:07:58.569886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.570034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.570060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.570170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.570316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.570341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.570477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.570586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.570611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.570750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.570875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.570899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 
00:20:43.033 [2024-04-18 17:07:58.571030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.571144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.571180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.033 [2024-04-18 17:07:58.571287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.571402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.033 [2024-04-18 17:07:58.571427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.033 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.571536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.571668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.571696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.571802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.571945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.571969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 
00:20:43.034 [2024-04-18 17:07:58.572124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.572221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.572246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.572392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.572540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.572566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.572704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.572839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.572863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.572975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.573085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.573110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 
00:20:43.034 [2024-04-18 17:07:58.573256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.573358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.573402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.573508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.573681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.573706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.573806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.573950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.573974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.574090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.574192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.574217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 
00:20:43.034 [2024-04-18 17:07:58.574359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.574568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.574594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.574747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.574881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.574906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.575040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.575185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.575220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.575367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.575513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.575539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 
00:20:43.034 [2024-04-18 17:07:58.575680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.575816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.575840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.575966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.576112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.576137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.576241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.576401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.576427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.576559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.576719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.576745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 
00:20:43.034 [2024-04-18 17:07:58.576850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.576982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.577114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.577348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.577621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 
00:20:43.034 [2024-04-18 17:07:58.577846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.577978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.578082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.578182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.578207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.578327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.578461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.578487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.578587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.578744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.578769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 
00:20:43.034 [2024-04-18 17:07:58.578879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.579012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.034 [2024-04-18 17:07:58.579037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.034 qpair failed and we were unable to recover it. 00:20:43.034 [2024-04-18 17:07:58.579153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.035 [2024-04-18 17:07:58.579281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.035 [2024-04-18 17:07:58.579306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.035 qpair failed and we were unable to recover it. 00:20:43.035 [2024-04-18 17:07:58.579443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.035 [2024-04-18 17:07:58.579579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.035 [2024-04-18 17:07:58.579604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.035 qpair failed and we were unable to recover it. 00:20:43.035 [2024-04-18 17:07:58.579706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.035 [2024-04-18 17:07:58.579869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.035 [2024-04-18 17:07:58.579894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.035 qpair failed and we were unable to recover it. 
00:20:43.038 [2024-04-18 17:07:58.603619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.603755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.603780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.603885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.604018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.604044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.604184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.604312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.604344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.604483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.604604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.604628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 
00:20:43.038 [2024-04-18 17:07:58.604788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.604924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.604949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.605070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.605201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.605226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.605330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.605456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.605484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.605631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.605774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.605803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 
00:20:43.038 [2024-04-18 17:07:58.605938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.606075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.606100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.606241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.606393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.606420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.606533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.606667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.606703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.606837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.606978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.607002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 
00:20:43.038 [2024-04-18 17:07:58.607131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.607275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.607305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.607450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.607583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.607608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.607740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.607883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.607908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.038 [2024-04-18 17:07:58.608041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.608147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.608172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 
00:20:43.038 [2024-04-18 17:07:58.608275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.608409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.038 [2024-04-18 17:07:58.608435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.038 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.608571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.608684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.608713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.608823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.608951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.608976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.609109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.609207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.609232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 
00:20:43.039 [2024-04-18 17:07:58.609355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.609499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.609525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.609627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.609770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.609795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.609955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.610087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.610111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.610216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.610375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.610406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 
00:20:43.039 [2024-04-18 17:07:58.610542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.610676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.610708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.610811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.610942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.610969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.611101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.611260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.611285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.611427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.611567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.611592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 
00:20:43.039 [2024-04-18 17:07:58.611703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.611846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.611872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.611976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.612082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.612107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.612246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.612350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.612375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.612541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.612679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.612709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 
00:20:43.039 [2024-04-18 17:07:58.612815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.612975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.613000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.613133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.613231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.613255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.613399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.613506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.613532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.613666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.613795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.613820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 
00:20:43.039 [2024-04-18 17:07:58.613918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.614052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.614077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.614192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.614324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.614348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.614491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.614597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.614622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.614732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.614835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.614861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 
00:20:43.039 [2024-04-18 17:07:58.615021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.615150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.615174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.615302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.615429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.615455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.615562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.615689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.615714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.615879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.615984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.616012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 
00:20:43.039 [2024-04-18 17:07:58.616137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.616266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.616291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.616422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.616532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.039 [2024-04-18 17:07:58.616557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.039 qpair failed and we were unable to recover it. 00:20:43.039 [2024-04-18 17:07:58.616660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.616770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.616795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.616906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.617039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.617063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 
00:20:43.040 [2024-04-18 17:07:58.617174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.617309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.617334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.617449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.617555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.617580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.617680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.617803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.617828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.617985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.618140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.618165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 
00:20:43.040 [2024-04-18 17:07:58.618263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.618363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.618403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.618537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.618649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.618674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.618780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.618928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.618953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.619063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.619222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.619247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 
00:20:43.040 [2024-04-18 17:07:58.619407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.619514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.619540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.619640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.619778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.619803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.619910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.620054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.620081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.620215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.620362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.620404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 
00:20:43.040 [2024-04-18 17:07:58.620541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.620697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.620722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.620828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.620931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.620956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.621096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.621242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.621266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.621376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.621490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.621516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 
00:20:43.040 [2024-04-18 17:07:58.621658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.621775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.621800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.621934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.622045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.622070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.622175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.622286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.622311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.622433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.622567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.622592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 
00:20:43.040 [2024-04-18 17:07:58.622707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.622811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.622836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.622954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.623056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.623081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.623207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.623309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.623334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.623481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.623589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.623614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 
00:20:43.040 [2024-04-18 17:07:58.623729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.623864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.623889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.624016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.624148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.624172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.624279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.624394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.624420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 00:20:43.040 [2024-04-18 17:07:58.624526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.624658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.040 [2024-04-18 17:07:58.624683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.040 qpair failed and we were unable to recover it. 
00:20:43.040 [2024-04-18 17:07:58.624835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.624966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.624993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.625094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.625226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.625251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.625393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.625504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.625529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.625631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.625793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.625818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 
00:20:43.041 [2024-04-18 17:07:58.625993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.626125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.626150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.626282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.626434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.626460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.626617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.626756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.626781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.626880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.627049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.627074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 
00:20:43.041 [2024-04-18 17:07:58.627206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.627315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.627340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.627475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.627621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.627647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.627787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.627924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.627949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.628082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.628235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.628261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 
00:20:43.041 [2024-04-18 17:07:58.628413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.628542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.628567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.628668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.628770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.628797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.628960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.629067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.629092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.629205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.629314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.629340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 
00:20:43.041 [2024-04-18 17:07:58.629495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.629654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.629690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.629797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.629896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.629921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.630052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.630210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.630235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.630378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.630522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.630547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 
00:20:43.041 [2024-04-18 17:07:58.630658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.630806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.630833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.630995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.631124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.631148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.631262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.631422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.631448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.041 [2024-04-18 17:07:58.631609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.631767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.631792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 
00:20:43.041 [2024-04-18 17:07:58.631898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.632021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.041 [2024-04-18 17:07:58.632046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.041 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.632204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.632412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.632438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.632544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.632703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.632728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.632866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.632999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.633024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 
00:20:43.042 [2024-04-18 17:07:58.633151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.633280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.633305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.633418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.633550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.633574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.633680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.633810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.633835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.633943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.634049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.634075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 
00:20:43.042 [2024-04-18 17:07:58.634185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.634345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.634375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.634594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.634804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.634829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.634936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.635066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.635091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.635246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.635393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.635419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 
00:20:43.042 [2024-04-18 17:07:58.635559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.635667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.635692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.635796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.635931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.635956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.636058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.636190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.636214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.636322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.636458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.636484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 
00:20:43.042 [2024-04-18 17:07:58.636615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.636724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.636753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.636856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.636995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.637020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.637230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.637371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.637403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.637563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.637665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.637690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 
00:20:43.042 [2024-04-18 17:07:58.637796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.637954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.637980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.638118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.638250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.638275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.638438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.638566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.638591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.638695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.638798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.638823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 
00:20:43.042 [2024-04-18 17:07:58.638948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.639049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.639076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.639179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.639313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.639339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.639482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.639693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.639718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.639849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.639949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.639976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 
00:20:43.042 [2024-04-18 17:07:58.640108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.640241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.640267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.640477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.640610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.640635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.640804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.641012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.641036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 00:20:43.042 [2024-04-18 17:07:58.641143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.641272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.042 [2024-04-18 17:07:58.641297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.042 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.641402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.641502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.641528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.641671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.641779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.641804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.641940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.642073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.642098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.642228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.642334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.642361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.642507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.642618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.642643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.642780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.642905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.642930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.643140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.643263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.643291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.643407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.643518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.643543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.643705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.643834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.643859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.643970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.644072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.644097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.644203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.644336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.644361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.644552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.644681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.644705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.644822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.644953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.644978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.645119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.645249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.645276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.645408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.645524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.645549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.645664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.645796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.645821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.645918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.646047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.646076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.646186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.646316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.646341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.646556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.646684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.646709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.646867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.647075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.647099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.647202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.647331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.647355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.647490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.647594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.647619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.647789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.647896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.647920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.648029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.648241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.648266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.648400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.648510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.648535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.648670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.648813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.648838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.648965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.649108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.649138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.649272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.649409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.649435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.649566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.649722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.649747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.649879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.649982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.650007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.650108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.650235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.650260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 00:20:43.043 [2024-04-18 17:07:58.650371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.650544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.043 [2024-04-18 17:07:58.650569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.043 qpair failed and we were unable to recover it. 
00:20:43.043 [2024-04-18 17:07:58.650678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.650783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.650809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.650920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.651052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.651077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.651214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.651345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.651370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.651543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.651641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.651667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 
00:20:43.044 [2024-04-18 17:07:58.651796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.651931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.651960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.652094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.652228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.652253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.652365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.652477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.652504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.652603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.652706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.652731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 
00:20:43.044 [2024-04-18 17:07:58.652871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.653008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.653032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.653179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.653279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.653305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.653444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.653554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.653578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.653752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.653880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.653905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 
00:20:43.044 [2024-04-18 17:07:58.654014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.654170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.654195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.654299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.654472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.654497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.654600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.654733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.654758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.654908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.655038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.655063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 
00:20:43.044 [2024-04-18 17:07:58.655204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.655334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.655359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.655473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.655607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.655633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.655766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.655866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.655893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.655996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.656141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.656165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 
00:20:43.044 [2024-04-18 17:07:58.656266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.656425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.656450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.656613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.656719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.656744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.656858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.656955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.656980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.657111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.657246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.657270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 
00:20:43.044 [2024-04-18 17:07:58.657390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.657523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.657548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.657684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.657792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.657817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.657948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.658075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.658100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.658209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.658307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.658332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 
00:20:43.044 [2024-04-18 17:07:58.658496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.658596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.658620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.658730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.658861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.658886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.659019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.659154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.659178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 00:20:43.044 [2024-04-18 17:07:58.659332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.659442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.659467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.044 qpair failed and we were unable to recover it. 
00:20:43.044 [2024-04-18 17:07:58.659574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.044 [2024-04-18 17:07:58.659704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.659729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.045 qpair failed and we were unable to recover it. 00:20:43.045 [2024-04-18 17:07:58.659863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.659966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.659991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.045 qpair failed and we were unable to recover it. 00:20:43.045 [2024-04-18 17:07:58.660123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.660233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.660259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.045 qpair failed and we were unable to recover it. 00:20:43.045 [2024-04-18 17:07:58.660404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.660540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.660565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.045 qpair failed and we were unable to recover it. 
00:20:43.045 [2024-04-18 17:07:58.660673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.660827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.660852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.045 qpair failed and we were unable to recover it. 00:20:43.045 [2024-04-18 17:07:58.660959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.661062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.661087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.045 qpair failed and we were unable to recover it. 00:20:43.045 [2024-04-18 17:07:58.661247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.661388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.661414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.045 qpair failed and we were unable to recover it. 00:20:43.045 [2024-04-18 17:07:58.661554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.661658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.045 [2024-04-18 17:07:58.661683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.045 qpair failed and we were unable to recover it. 
00:20:43.046 [2024-04-18 17:07:58.675418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.046 [2024-04-18 17:07:58.675517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.046 [2024-04-18 17:07:58.675542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.046 qpair failed and we were unable to recover it.
00:20:43.046 [2024-04-18 17:07:58.675642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.046 [2024-04-18 17:07:58.675745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.046 [2024-04-18 17:07:58.675770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.046 qpair failed and we were unable to recover it.
00:20:43.046 [2024-04-18 17:07:58.675874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.046 [2024-04-18 17:07:58.675959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:43.046 [2024-04-18 17:07:58.675976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.047 [2024-04-18 17:07:58.675995] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:43.047 [2024-04-18 17:07:58.676000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.047 qpair failed and we were unable to recover it.
00:20:43.047 [2024-04-18 17:07:58.676010] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:43.047 [2024-04-18 17:07:58.676022] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:43.047 [2024-04-18 17:07:58.676033] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:43.047 [2024-04-18 17:07:58.676089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:20:43.047 [2024-04-18 17:07:58.676111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.047 [2024-04-18 17:07:58.676123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:20:43.047 [2024-04-18 17:07:58.676148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:20:43.047 [2024-04-18 17:07:58.676151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:20:43.047 [2024-04-18 17:07:58.676239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.047 [2024-04-18 17:07:58.676263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.047 qpair failed and we were unable to recover it.
00:20:43.047 [2024-04-18 17:07:58.676371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.047 [2024-04-18 17:07:58.676483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.047 [2024-04-18 17:07:58.676507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.047 qpair failed and we were unable to recover it.
00:20:43.047 [2024-04-18 17:07:58.676667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.047 [2024-04-18 17:07:58.676770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.047 [2024-04-18 17:07:58.676795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.047 qpair failed and we were unable to recover it.
00:20:43.047 [2024-04-18 17:07:58.684510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.047 [2024-04-18 17:07:58.684617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.047 [2024-04-18 17:07:58.684642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.047 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.684776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.684881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.684905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.685005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.685104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.685128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.685246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.685389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.685415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 
00:20:43.048 [2024-04-18 17:07:58.685523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.685622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.685647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.685857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.686015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.686040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.686199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.686329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.686354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.686460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.686562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.686586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 
00:20:43.048 [2024-04-18 17:07:58.686688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.686823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.686848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.686947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.687051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.687075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.687184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.687286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.687311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.687475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.687575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.687599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 
00:20:43.048 [2024-04-18 17:07:58.687730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.687834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.687858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.687965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.688123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.048 [2024-04-18 17:07:58.688147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.048 qpair failed and we were unable to recover it. 00:20:43.048 [2024-04-18 17:07:58.688255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.688392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.688419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.688521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.688621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.688648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 
00:20:43.325 [2024-04-18 17:07:58.688753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.688962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.688987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.689114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.689238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.689262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.689369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.689482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.689506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.689605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.689701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.689725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 
00:20:43.325 [2024-04-18 17:07:58.689874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.689976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.690001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.690114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.690229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.690254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.690362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.690522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.690547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.690649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.690758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.690783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 
00:20:43.325 [2024-04-18 17:07:58.690894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.690993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.691017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.691116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.691216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.691242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.691348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.691514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.691540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.691650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.691747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.691772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 
00:20:43.325 [2024-04-18 17:07:58.691923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.692048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.692073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.692173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.692308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.692332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.692444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.692547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.325 [2024-04-18 17:07:58.692573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.325 qpair failed and we were unable to recover it. 00:20:43.325 [2024-04-18 17:07:58.692715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.692817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.692842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 
00:20:43.326 [2024-04-18 17:07:58.692950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.693047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.693071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.693176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.693304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.693328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.693441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.693543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.693568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.693668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.693795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.693819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 
00:20:43.326 [2024-04-18 17:07:58.693914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.694016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.694043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.694160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.694265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.694290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.694404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.694506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.694530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.694643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.694762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.694787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 
00:20:43.326 [2024-04-18 17:07:58.694919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.695020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.695045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.695163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.695298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.695323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.695433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.695538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.695563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.695708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.695810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.695835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 
00:20:43.326 [2024-04-18 17:07:58.695942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.696101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.696126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.696259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.696406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.696432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.696561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.696663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.696688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.696834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.696939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.696963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 
00:20:43.326 [2024-04-18 17:07:58.697065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.697167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.697191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.697297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.697401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.697426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.697530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.697630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.697654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.697758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.697862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.697887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 
00:20:43.326 [2024-04-18 17:07:58.697992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.698100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.698124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.698256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.698361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.698391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.698525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.698627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.698651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.698780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.698885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.698909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 
00:20:43.326 [2024-04-18 17:07:58.699016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.699119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.699143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.699274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.699421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.699447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.699551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.699650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.699675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.326 qpair failed and we were unable to recover it. 00:20:43.326 [2024-04-18 17:07:58.699772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.326 [2024-04-18 17:07:58.699924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.327 [2024-04-18 17:07:58.699948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.327 qpair failed and we were unable to recover it. 
00:20:43.327 [... same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7f1b28000b90, addr=10.0.0.2, port=4420 through 2024-04-18 17:07:58.722277 ...]
00:20:43.330 [2024-04-18 17:07:58.722386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.722501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.722526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.722627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.722740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.722767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.722906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.723011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.723035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.723142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.723288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.723313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 
00:20:43.330 [2024-04-18 17:07:58.723417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.723571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.723596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.723731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.723841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.723865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.723970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.724099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.724124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.724238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.724343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.724368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 
00:20:43.330 [2024-04-18 17:07:58.724512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.724619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.724644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.724752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.724859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.724886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.725001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.725156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.725181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.725310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.725460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.725486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 
00:20:43.330 [2024-04-18 17:07:58.725588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.725716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.725740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.725951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.726054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.726080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.726184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.726293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.726317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.726437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.726551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.726578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 
00:20:43.330 [2024-04-18 17:07:58.726733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.726865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.726891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.727000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.727155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.727180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.727283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.727390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.727416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.727524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.727628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.727654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 
00:20:43.330 [2024-04-18 17:07:58.727767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.727881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.727906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.728047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.728148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.728173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.330 qpair failed and we were unable to recover it. 00:20:43.330 [2024-04-18 17:07:58.728274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.728378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.330 [2024-04-18 17:07:58.728409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.728525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.728746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.728771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 
00:20:43.331 [2024-04-18 17:07:58.728875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.728983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.729008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.729120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.729330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.729355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.729506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.729615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.729641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.729772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.729885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.729910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 
00:20:43.331 [2024-04-18 17:07:58.730048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.730158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.730183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.730294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.730426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.730452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.730566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.730695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.730720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.730818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.730952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.730977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 
00:20:43.331 [2024-04-18 17:07:58.731081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.731193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.731217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.731353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.731503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.731528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.731639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.731747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.731773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.731882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.731994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.732019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 
00:20:43.331 [2024-04-18 17:07:58.732151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.732259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.732285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.732400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.732516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.732541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.732662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.732768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.732793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.732900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.733011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.733036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 
00:20:43.331 [2024-04-18 17:07:58.733165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.733267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.733293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.733421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.733538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.733563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.733664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.733766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.733792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.733896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.734033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.734057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 
00:20:43.331 [2024-04-18 17:07:58.734169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.734280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.734304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.734409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.734516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.734540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.734651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.734785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.734809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.734943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.735048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.735072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 
00:20:43.331 [2024-04-18 17:07:58.735175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.735281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.735306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.735411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.735518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.735544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.735675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.735780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.331 [2024-04-18 17:07:58.735804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.331 qpair failed and we were unable to recover it. 00:20:43.331 [2024-04-18 17:07:58.735903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.736007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.736031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 
00:20:43.332 [2024-04-18 17:07:58.736141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.736252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.736276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.736408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.736552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.736576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.736695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.736803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.736828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.736932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.737034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.737059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 
00:20:43.332 [2024-04-18 17:07:58.737171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.737269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.737294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.737416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.737517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.737542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.737658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.737765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.737790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.737898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.738032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.738057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 
00:20:43.332 [2024-04-18 17:07:58.738186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.738284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.738309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.738418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.738554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.738580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.738685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.738785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.738812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 00:20:43.332 [2024-04-18 17:07:58.738914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.739025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.332 [2024-04-18 17:07:58.739050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.332 qpair failed and we were unable to recover it. 
00:20:43.332 [2024-04-18 17:07:58.739154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.739252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.739277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.739407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.739520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.739544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.739650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.739777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.739801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.739905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.740016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.740041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.740168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.740316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.740341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.740459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.740562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.740587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.740743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.740854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.740878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.741014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.741139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.741163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.741296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.741401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.741426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.741537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.741642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.741667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.741772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.741909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.741934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.742070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.742176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.742202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.742311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.742417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.742443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.742558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.742667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.742692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.742802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.742912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.742936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.332 qpair failed and we were unable to recover it.
00:20:43.332 [2024-04-18 17:07:58.743053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.332 [2024-04-18 17:07:58.743206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.743231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.743361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.743467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.743492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.743600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.743730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.743755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.743865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.743966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.743991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.744095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.744196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.744221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.744326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.744426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.744452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.744552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.744688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.744713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.744828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.744935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.744962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.745072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.745180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.745206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.745312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.745437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.745466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.745572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.745675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.745700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.745808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.745938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.745962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.746080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.746183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.746207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.746360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.746472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.746497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.746613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.746713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.746737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.746835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.746931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.746956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.747067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.747166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.747190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.747303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.747433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.747458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.747566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.747672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.747697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.747807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.747943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.747972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.748077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.748190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.748214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.748318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.748423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.748449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.748559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.748663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.748687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.748792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.748921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.748946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.749079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.749290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.749315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.749426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.749563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.749588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.749705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.749811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.749836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.749938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.750071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.750096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.333 [2024-04-18 17:07:58.750198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.750305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.333 [2024-04-18 17:07:58.750331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.333 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.750467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.750679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.750709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.750814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.750921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.750946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.751073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.751182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.751206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.751317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.751426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.751452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.751557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.751663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.751687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.751812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.751938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.751962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.752067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.752171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.752195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.752306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.752419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.752444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.752582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.752690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.752715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.752852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.752952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.752978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.753084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.753219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.753248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.753354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.753464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.753489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.753592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.753725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.753751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.753855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.753990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.754014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.754174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.754279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.754305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.754440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.754555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.754579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.754714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.754814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.754841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.754981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.755080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.755105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.755207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.755309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.755333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.755438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.755595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.755620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.755726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.755841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.755866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.755978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.756090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.756114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.756222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.756330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.756356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.334 qpair failed and we were unable to recover it.
00:20:43.334 [2024-04-18 17:07:58.756481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.334 [2024-04-18 17:07:58.756592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.756618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.335 qpair failed and we were unable to recover it.
00:20:43.335 [2024-04-18 17:07:58.756754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.756856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.756881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.335 qpair failed and we were unable to recover it.
00:20:43.335 [2024-04-18 17:07:58.756983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.757098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.757123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.335 qpair failed and we were unable to recover it.
00:20:43.335 [2024-04-18 17:07:58.757233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.757370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.757403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.335 qpair failed and we were unable to recover it.
00:20:43.335 [2024-04-18 17:07:58.757563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.757668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.335 [2024-04-18 17:07:58.757693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.335 qpair failed and we were unable to recover it.
00:20:43.335 [2024-04-18 17:07:58.757795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.757954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.757979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.758089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.758223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.758247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.758387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.758491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.758516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.758646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.758778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.758802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 
00:20:43.335 [2024-04-18 17:07:58.758924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.759035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.759059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.759159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.759260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.759284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.759435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.759551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.759576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.759710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.759845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.759870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 
00:20:43.335 [2024-04-18 17:07:58.759975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.760109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.760134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.760244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.760353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.760378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.760492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.760600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.760625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.760767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.760868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.760894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 
00:20:43.335 [2024-04-18 17:07:58.761002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.761129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.761153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.761258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.761368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.761401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.761532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.761635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.761659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.761755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.761860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.761885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 
00:20:43.335 [2024-04-18 17:07:58.761993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.762124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.762148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.762255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.762389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.762414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.762522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.762630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.762655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.762763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.762895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.762920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 
00:20:43.335 [2024-04-18 17:07:58.763017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.763116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.763141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.763274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.763369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.763401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.763509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.763616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.763641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 00:20:43.335 [2024-04-18 17:07:58.763753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.763890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.763916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.335 qpair failed and we were unable to recover it. 
00:20:43.335 [2024-04-18 17:07:58.764029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.335 [2024-04-18 17:07:58.764134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.764158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.764256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.764422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.764448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.764578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.764689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.764715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.764819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.764931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.764955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 
00:20:43.336 [2024-04-18 17:07:58.765087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.765202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.765227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.765328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.765454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.765480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.765593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.765694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.765719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.765836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.765941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.765966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 
00:20:43.336 [2024-04-18 17:07:58.766094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.766238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.766262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.766367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.766480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.766505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.766607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.766741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.766765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.766868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.766967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.766992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 
00:20:43.336 [2024-04-18 17:07:58.767094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.767201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.767226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.767325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.767464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.767490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.767586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.767740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.767764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.767910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.768042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.768067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 
00:20:43.336 [2024-04-18 17:07:58.768174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.768274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.768298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.768445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.768571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.768596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.768738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.768842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.768867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.768975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.769121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.769146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 
00:20:43.336 [2024-04-18 17:07:58.769290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.769423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.769448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.769549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.769644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.769669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.769775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.769878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.769903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.770011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.770144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.770169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 
00:20:43.336 [2024-04-18 17:07:58.770313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.770427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.770453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.770553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.770661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.770686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.770794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.770896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.770920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 00:20:43.336 [2024-04-18 17:07:58.771031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.771162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.771187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.336 qpair failed and we were unable to recover it. 
00:20:43.336 [2024-04-18 17:07:58.771292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.771422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.336 [2024-04-18 17:07:58.771447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.771577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.771681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.771706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.771814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.771926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.771951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.772054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.772177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.772202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 
00:20:43.337 [2024-04-18 17:07:58.772331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.772437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.772462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.772568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.772699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.772723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.772825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.772928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.772953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.773053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.773189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.773214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 
00:20:43.337 [2024-04-18 17:07:58.773327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.773447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.773472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.773575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.773675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.773701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.773833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.773939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.773964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 00:20:43.337 [2024-04-18 17:07:58.774079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.774219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.337 [2024-04-18 17:07:58.774244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.337 qpair failed and we were unable to recover it. 
00:20:43.337 [2024-04-18 17:07:58.774374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.337 [2024-04-18 17:07:58.774488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:43.337 [2024-04-18 17:07:58.774513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420
00:20:43.337 qpair failed and we were unable to recover it.
00:20:43.340 [... the same sequence — two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock error for tqpair=0x7f1b28000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." — repeats continuously from 17:07:58.774618 through 17:07:58.796635 ...]
00:20:43.340 [2024-04-18 17:07:58.796732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.796842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.796866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.797000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.797116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.797140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.797248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.797388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.797414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.797513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.797608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.797632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 
00:20:43.340 [2024-04-18 17:07:58.797743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.797842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.797866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.797968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.798105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.798129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.798254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.798357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.798412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.798528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.798640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.798664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 
00:20:43.340 [2024-04-18 17:07:58.798765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.798865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.798890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.799021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.799127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.799152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.799256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.799358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.799389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 00:20:43.340 [2024-04-18 17:07:58.799503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.799610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.340 [2024-04-18 17:07:58.799635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.340 qpair failed and we were unable to recover it. 
00:20:43.341 [2024-04-18 17:07:58.799766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.799871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.799895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.799999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.800098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.800122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.800263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.800366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.800398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.800511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.800644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.800669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 
00:20:43.341 [2024-04-18 17:07:58.800804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.800908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.800933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.801042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.801144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.801169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.801270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.801368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.801401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.801506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.801608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.801634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 
00:20:43.341 [2024-04-18 17:07:58.801742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.801846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.801871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.801979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.802087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.802114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.802223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.802334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.802358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.802467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.802585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.802609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 
00:20:43.341 [2024-04-18 17:07:58.802716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.802862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.802885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.802993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.803096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.803119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.803221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.803352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.803377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.803493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.803602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.803627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 
00:20:43.341 [2024-04-18 17:07:58.803731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.803866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.803890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.804005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.804134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.804158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.804266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.804392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.804417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.804525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.804630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.804655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 
00:20:43.341 [2024-04-18 17:07:58.804785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.804886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.804911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.805011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.805120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.805144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.805246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.805355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.805379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.805486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.805595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.805621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 
00:20:43.341 [2024-04-18 17:07:58.805733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.805836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.805860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.806004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.806105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.806130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.806238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.806363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.806404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.341 [2024-04-18 17:07:58.806508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.806616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.806641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 
00:20:43.341 [2024-04-18 17:07:58.806745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.806846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.341 [2024-04-18 17:07:58.806871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.341 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.807006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.807130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.807154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.807259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.807363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.807395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.807507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.807618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.807643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 
00:20:43.342 [2024-04-18 17:07:58.807752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.807859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.807883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.807982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.808141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.808166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.808275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.808378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.808409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.808516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.808621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.808645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 
00:20:43.342 [2024-04-18 17:07:58.808756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.808861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.808885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.808993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.809123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.809147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.809252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.809356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.809387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.809491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.809594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.809619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 
00:20:43.342 [2024-04-18 17:07:58.809758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.809863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.809886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.809984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.810094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.810117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.810260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.810388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.810413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.810512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.810620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.810645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 
00:20:43.342 [2024-04-18 17:07:58.810753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.810854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.810878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.810983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.811099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.811123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.811231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.811356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.811388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.811502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.811602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.811627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 
00:20:43.342 [2024-04-18 17:07:58.811734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.811837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.811861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.811962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.812090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.812114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.812220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.812328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.812352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.812483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.812581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.812605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 
00:20:43.342 [2024-04-18 17:07:58.812716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.812821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.812846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.812958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.813062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.813087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.813192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.813299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.813324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.813457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.813563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.813587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 
00:20:43.342 [2024-04-18 17:07:58.813701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.813793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.813817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.342 qpair failed and we were unable to recover it. 00:20:43.342 [2024-04-18 17:07:58.813926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.814036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.342 [2024-04-18 17:07:58.814060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.814165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.814271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.814295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.814405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.814539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.814563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 
00:20:43.343 [2024-04-18 17:07:58.814679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.814806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.814830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.814937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.815035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.815059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.815156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.815289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.815313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.815427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.815549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.815573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 
00:20:43.343 [2024-04-18 17:07:58.815692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.815791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.815816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.815927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.816057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.816082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.816218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.816349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.816373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.816489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.816586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.816610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 
00:20:43.343 [2024-04-18 17:07:58.816720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.816825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.816849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.816950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.817061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.817086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.817205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.817318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.817342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.817464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.817567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.817592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 
00:20:43.343 [2024-04-18 17:07:58.817694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.817807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.817832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.817946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.818046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.818071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.818175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.818285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.818312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.818418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.818531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.818555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 
00:20:43.343 [2024-04-18 17:07:58.818688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.818794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.818819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.818921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.819021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.819046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 17:07:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:43.343 [2024-04-18 17:07:58.819155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.819256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.819280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 17:07:58 -- common/autotest_common.sh@850 -- # return 0 00:20:43.343 [2024-04-18 17:07:58.819416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.819523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.819547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 
00:20:43.343 [2024-04-18 17:07:58.819667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 17:07:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:43.343 [2024-04-18 17:07:58.819783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.819808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.343 qpair failed and we were unable to recover it. 00:20:43.343 [2024-04-18 17:07:58.819914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 [2024-04-18 17:07:58.820011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.343 17:07:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:43.343 [2024-04-18 17:07:58.820037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.820176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.820283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:20:43.344 [2024-04-18 17:07:58.820308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 
00:20:43.344 [2024-04-18 17:07:58.820428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.820565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.820590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.820698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.820814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.820838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.820939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.821051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.821075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.821218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.821352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.821376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 
00:20:43.344 [2024-04-18 17:07:58.821496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.821599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.821623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.821730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.821836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.821863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.821964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.822102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.822130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.822289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.822440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.822467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 
00:20:43.344 [2024-04-18 17:07:58.822574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.822679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.822705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.822818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.822917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.822941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.823054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.823200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.823226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.823338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.823462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.823488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 
00:20:43.344 [2024-04-18 17:07:58.823599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.823770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.823795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.823898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.824028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.824053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.824166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.824276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.824302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.824415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.824525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.824550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 
00:20:43.344 [2024-04-18 17:07:58.824659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.824816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.824841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.824955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.825066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.825091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.825222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.825324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.825349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.825470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.825579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.825604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 
00:20:43.344 [2024-04-18 17:07:58.825720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.825846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.825871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.825979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.826081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.826106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.826246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.826344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.826369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.826497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.826599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.826624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 
00:20:43.344 [2024-04-18 17:07:58.826729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.826838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.826864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.827025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.827125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.827150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.827253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.827394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.827419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.344 qpair failed and we were unable to recover it. 00:20:43.344 [2024-04-18 17:07:58.827554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.344 [2024-04-18 17:07:58.827669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.827694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 
00:20:43.345 [2024-04-18 17:07:58.827800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.827904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.827931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 00:20:43.345 [2024-04-18 17:07:58.828042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.828151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.828176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 00:20:43.345 [2024-04-18 17:07:58.828283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.828411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.828436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 00:20:43.345 [2024-04-18 17:07:58.828544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.828652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.828676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 
00:20:43.345 [2024-04-18 17:07:58.828832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.828934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.828959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 00:20:43.345 [2024-04-18 17:07:58.829074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.829198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.829222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 00:20:43.345 [2024-04-18 17:07:58.829325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.829432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.829458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 00:20:43.345 [2024-04-18 17:07:58.829598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.829699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.345 [2024-04-18 17:07:58.829723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.345 qpair failed and we were unable to recover it. 
00:20:43.347 17:07:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:43.347 17:07:58 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:43.347 17:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:43.347 17:07:58 -- common/autotest_common.sh@10 -- # set +x
00:20:43.348 [2024-04-18 17:07:58.851204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.851312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.851335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.851467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.851579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.851603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.851758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.851862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.851887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.851983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.852088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.852111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 
00:20:43.348 [2024-04-18 17:07:58.852223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.852321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.852347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.852458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.852600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.852625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.852722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.852877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.852901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.853008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.853135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.853164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 
00:20:43.348 [2024-04-18 17:07:58.853265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.853393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.853418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.853561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.853660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.853683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.853824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.853944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.853968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.854074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.854176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.854202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 
00:20:43.348 [2024-04-18 17:07:58.854316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.854429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.854454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.854564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.854664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.854688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.854797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.854904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.854928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.855037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.855140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.855165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 
00:20:43.348 [2024-04-18 17:07:58.855266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.855367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.855399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.855541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.855642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.855670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.855839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.855946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.855971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.348 qpair failed and we were unable to recover it. 00:20:43.348 [2024-04-18 17:07:58.856084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.348 [2024-04-18 17:07:58.856188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.856212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 
00:20:43.349 [2024-04-18 17:07:58.856319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.856427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.856453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.856573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.856688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.856712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.856814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.856947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.856972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.857082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.857185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.857209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 
00:20:43.349 [2024-04-18 17:07:58.857320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.857452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.857477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.857582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.857687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.857711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.857812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.857961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.857985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.858096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.858224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.858248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 
00:20:43.349 [2024-04-18 17:07:58.858361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.858537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.858561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.858689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.858816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.858840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.858939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.859062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.859086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.859192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.859293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.859317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 
00:20:43.349 [2024-04-18 17:07:58.859438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.859574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.859598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.859703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.859800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.859824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.859956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.860074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.860098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.860201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.860297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.860320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 
00:20:43.349 [2024-04-18 17:07:58.860433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.860536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.860560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.860686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.860813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.860837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.860996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.861138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.861162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.861292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.861400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.861426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 
00:20:43.349 [2024-04-18 17:07:58.861545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.861653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.861677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.861803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.861960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.861985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.862093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.862250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.862275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.862371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.862524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.862548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 
00:20:43.349 [2024-04-18 17:07:58.862706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.862810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.862833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.862944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.863049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.863074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.863211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.863339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.863362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.863478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.863610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.863634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 
00:20:43.349 [2024-04-18 17:07:58.863741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.863893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.863917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.864023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.864136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.864160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.349 [2024-04-18 17:07:58.864293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.864396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.349 [2024-04-18 17:07:58.864421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.349 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.864524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.864625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.864649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 
00:20:43.350 [2024-04-18 17:07:58.864746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.864871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.864895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.865005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.865130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.865156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.865269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.865374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.865404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.865519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.865616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.865640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 
00:20:43.350 [2024-04-18 17:07:58.865748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.865873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.865897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.866008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.866105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.866129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.866239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.866378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.866409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.866517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.866645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.866669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 
00:20:43.350 [2024-04-18 17:07:58.866768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.866879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.866903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.867007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.867110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.867134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.867228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.867335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.867359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.867551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.867736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.867764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 
00:20:43.350 [2024-04-18 17:07:58.867906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.868061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.868087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.868226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.868331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.868358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b18000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.868514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.868651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.868677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.868782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.868894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.868919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 
00:20:43.350 [2024-04-18 17:07:58.869055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.869160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.869187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.869282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.869391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.869417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.869524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.869635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.869659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.869790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.869901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.869925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 
00:20:43.350 [2024-04-18 17:07:58.870053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.870184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.870208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.870313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.870440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.870474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.870589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.870723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.870748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.870860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.870955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.870980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 
00:20:43.350 [2024-04-18 17:07:58.871078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.871176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.871202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.871305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.871435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.871462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.871576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.871695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.871720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.871857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 Malloc0 00:20:43.350 [2024-04-18 17:07:58.871963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.871988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 qpair failed and we were unable to recover it. 
00:20:43.350 [2024-04-18 17:07:58.872132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.872240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.872264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.350 17:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.350 qpair failed and we were unable to recover it. 00:20:43.350 [2024-04-18 17:07:58.872367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 17:07:58 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:43.350 [2024-04-18 17:07:58.872482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.350 [2024-04-18 17:07:58.872507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 17:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.351 [2024-04-18 17:07:58.872611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:20:43.351 [2024-04-18 17:07:58.872720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.872744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 
00:20:43.351 [2024-04-18 17:07:58.872850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.872952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.872976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.873076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.873188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.873211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.873316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.873424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.873449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.873546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.873677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.873701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 
00:20:43.351 [2024-04-18 17:07:58.873805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.873915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.873947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.874053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.874152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.874176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.874290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.874423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.874450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.874561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.874690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.874715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 
00:20:43.351 [2024-04-18 17:07:58.874821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.874928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.874952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.875058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.875213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.875237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.875342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.875459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.875485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 [2024-04-18 17:07:58.875475] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.875614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.875747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.875771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 
00:20:43.351 [2024-04-18 17:07:58.875896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.875999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.876022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.876127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.876233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.876258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.876366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.876497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.876522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.876631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.876762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.876786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 
00:20:43.351 [2024-04-18 17:07:58.876892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.877001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.877027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.877136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.877249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.877274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.877391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.877521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.877546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.877663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.877771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.877795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 
00:20:43.351 [2024-04-18 17:07:58.877906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.878007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.878031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.878139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.878243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.878267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.878403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.878519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.878544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.878655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.878749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.878773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 
00:20:43.351 [2024-04-18 17:07:58.878875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.879031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.879056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.879169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.879298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.879322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.879439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.879544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.879569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.879673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.879799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.879823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 
00:20:43.351 [2024-04-18 17:07:58.879927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.880038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.880062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.351 qpair failed and we were unable to recover it. 00:20:43.351 [2024-04-18 17:07:58.880160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.351 [2024-04-18 17:07:58.880270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.880294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.880391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.880527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.880552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.880690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.880803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.880829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 
00:20:43.352 [2024-04-18 17:07:58.880930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.881033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.881059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.881161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.881289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.881314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.881425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.881533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.881558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.881670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.881798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.881823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 
00:20:43.352 [2024-04-18 17:07:58.881927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.882032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.882057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.882162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.882264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.882288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.882414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.882548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.882573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.882674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.882787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.882812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 
00:20:43.352 [2024-04-18 17:07:58.882911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.883013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.883037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.883137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.883255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.883278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.883393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.883527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.883551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.883660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 17:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.352 [2024-04-18 17:07:58.883783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.883807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 
00:20:43.352 17:07:58 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:43.352 [2024-04-18 17:07:58.883902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 17:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.352 [2024-04-18 17:07:58.884039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.884064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:20:43.352 [2024-04-18 17:07:58.884196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.884299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.884323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 00:20:43.352 [2024-04-18 17:07:58.884430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.884531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.352 [2024-04-18 17:07:58.884557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.352 qpair failed and we were unable to recover it. 
00:20:43.353 17:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:43.353 17:07:58 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:43.353 17:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:43.353 17:07:58 -- common/autotest_common.sh@10 -- # set +x
00:20:43.354 17:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:43.354 17:07:58 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:43.354 17:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:43.354 17:07:58 -- common/autotest_common.sh@10 -- # set +x
00:20:43.355 [2024-04-18 17:07:58.903319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.355 [2024-04-18 17:07:58.903424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.355 [2024-04-18 17:07:58.903450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b28000b90 with addr=10.0.0.2, port=4420 00:20:43.355 qpair failed and we were unable to recover it. 00:20:43.355 [2024-04-18 17:07:58.903552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.355 [2024-04-18 17:07:58.903722] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.355 [2024-04-18 17:07:58.906762] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:20:43.355 [2024-04-18 17:07:58.906842] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f1b28000b90 (107): Transport endpoint is not connected 00:20:43.355 [2024-04-18 17:07:58.906917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 17:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.355 17:07:58 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:43.355 17:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.355 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:20:43.355 17:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.355 17:07:58 -- host/target_disconnect.sh@58 -- # wait 1765513 00:20:43.355 [2024-04-18 17:07:58.916155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.916350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.916379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.916406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.916419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.916450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:58.926114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.926237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.926265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.926280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.926295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.926326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:58.936073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.936184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.936210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.936225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.936237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.936266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:58.946143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.946267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.946292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.946312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.946325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.946354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:58.956151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.956270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.956295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.956310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.956323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.956352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:58.966126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.966239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.966265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.966279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.966291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.966320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:58.976126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.976252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.976278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.976293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.976305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.976334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:58.986249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.986358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.986390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.986407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.986420] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.986449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:58.996227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:58.996336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:58.996361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:58.996376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:58.996397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:58.996427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.355 [2024-04-18 17:07:59.006162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.355 [2024-04-18 17:07:59.006301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.355 [2024-04-18 17:07:59.006326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.355 [2024-04-18 17:07:59.006340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.355 [2024-04-18 17:07:59.006353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.355 [2024-04-18 17:07:59.006390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.355 qpair failed and we were unable to recover it. 
00:20:43.614 [2024-04-18 17:07:59.016220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.614 [2024-04-18 17:07:59.016343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.614 [2024-04-18 17:07:59.016369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.614 [2024-04-18 17:07:59.016392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.614 [2024-04-18 17:07:59.016406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.614 [2024-04-18 17:07:59.016436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.614 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.026250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.026363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.026398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.026414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.026436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.026465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.036286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.036393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.036425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.036445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.036458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.036487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.046308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.046416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.046441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.046455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.046468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.046496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.056465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.056587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.056612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.056626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.056639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.056668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.066405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.066530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.066555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.066569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.066582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.066611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.076402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.076514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.076540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.076555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.076567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.076596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.086428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.086532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.086557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.086571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.086584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.086612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.096447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.096560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.096585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.096600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.096612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.096641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.106480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.106587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.106612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.106627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.106639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.106667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.116511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.116622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.116647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.116662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.116675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.116703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.126665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.126773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.126804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.126819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.126832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.126861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.136665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.136772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.136798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.136812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.136824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.136854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.146618] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.146742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.146767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.146782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.146794] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.146822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.156641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.156755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.156781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.156795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.156807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.156836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.166773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.166886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.166911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.166925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.166937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.166972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.176679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.176790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.176816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.176831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.176843] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.176871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.186740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.615 [2024-04-18 17:07:59.186856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.615 [2024-04-18 17:07:59.186881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.615 [2024-04-18 17:07:59.186896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.615 [2024-04-18 17:07:59.186908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:43.615 [2024-04-18 17:07:59.186936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.615 qpair failed and we were unable to recover it. 
00:20:43.615 [2024-04-18 17:07:59.196736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.615 [2024-04-18 17:07:59.196846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.615 [2024-04-18 17:07:59.196870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.615 [2024-04-18 17:07:59.196884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.615 [2024-04-18 17:07:59.196897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.615 [2024-04-18 17:07:59.196925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.615 qpair failed and we were unable to recover it.
00:20:43.615 [2024-04-18 17:07:59.206815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.615 [2024-04-18 17:07:59.206929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.615 [2024-04-18 17:07:59.206956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.615 [2024-04-18 17:07:59.206971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.615 [2024-04-18 17:07:59.206986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.207015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.216844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.216987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.217018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.217033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.217045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.217075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.226873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.226984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.227010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.227024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.227037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.227065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.236900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.237023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.237048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.237062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.237074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.237103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.246877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.246980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.247005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.247020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.247032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.247072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.256905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.257038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.257062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.257077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.257095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.257124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.266918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.267025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.267051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.267065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.267077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.267106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.276941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.277049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.277075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.277090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.277102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.277131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.286962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.287060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.287085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.287100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.287112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.287141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.297025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.297135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.297160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.297175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.297187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.297217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.307027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.307189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.307214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.307229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.307241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.307270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.616 [2024-04-18 17:07:59.317147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.616 [2024-04-18 17:07:59.317253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.616 [2024-04-18 17:07:59.317277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.616 [2024-04-18 17:07:59.317292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.616 [2024-04-18 17:07:59.317304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.616 [2024-04-18 17:07:59.317332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.616 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.327122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.327232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.327258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.327272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.327285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.327315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.337119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.337235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.337261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.337275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.337288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.337317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.347142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.347249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.347274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.347288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.347306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.347335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.357181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.357288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.357313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.357327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.357339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.357368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.367194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.367299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.367325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.367340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.367352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.367389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.377272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.377433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.377460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.377474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.377487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.377517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.387264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.387397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.387424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.387438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.387451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.387480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.397319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.397437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.397462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.397477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.397489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.397518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.407309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.407416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.407441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.407456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.407468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.407497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.417415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.417531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.417556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.417574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.417587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.417616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.427413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.427539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.427565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.427580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.427594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.427624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.876 qpair failed and we were unable to recover it.
00:20:43.876 [2024-04-18 17:07:59.437420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.876 [2024-04-18 17:07:59.437528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.876 [2024-04-18 17:07:59.437553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.876 [2024-04-18 17:07:59.437573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.876 [2024-04-18 17:07:59.437586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.876 [2024-04-18 17:07:59.437616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.447468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.447587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.447612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.447627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.447639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.447667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.457549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.457685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.457709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.457724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.457736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.457765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.467518] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.467670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.467696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.467710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.467722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.467751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.477546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.477651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.477677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.477692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.477705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.477733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.487661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.487764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.487788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.487802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.487815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.487844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.497613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.497741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.497765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.497780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.497792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.497821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.507643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.507776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.507803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.507818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.507830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.507858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.517700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.517805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.517830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.517844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.517856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.517885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.527686] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:43.877 [2024-04-18 17:07:59.527797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:43.877 [2024-04-18 17:07:59.527831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:43.877 [2024-04-18 17:07:59.527847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:43.877 [2024-04-18 17:07:59.527859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:43.877 [2024-04-18 17:07:59.527889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:43.877 qpair failed and we were unable to recover it.
00:20:43.877 [2024-04-18 17:07:59.537836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.877 [2024-04-18 17:07:59.537949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.877 [2024-04-18 17:07:59.537982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.877 [2024-04-18 17:07:59.537999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.877 [2024-04-18 17:07:59.538012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:43.877 [2024-04-18 17:07:59.538043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:43.877 qpair failed and we were unable to recover it. 
00:20:43.877 [2024-04-18 17:07:59.547894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.877 [2024-04-18 17:07:59.548031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.877 [2024-04-18 17:07:59.548060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.877 [2024-04-18 17:07:59.548076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.877 [2024-04-18 17:07:59.548088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:43.877 [2024-04-18 17:07:59.548120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:43.877 qpair failed and we were unable to recover it. 
00:20:43.877 [2024-04-18 17:07:59.557820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.877 [2024-04-18 17:07:59.557926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.877 [2024-04-18 17:07:59.557953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.877 [2024-04-18 17:07:59.557968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.877 [2024-04-18 17:07:59.557980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:43.877 [2024-04-18 17:07:59.558010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:43.877 qpair failed and we were unable to recover it. 
00:20:43.877 [2024-04-18 17:07:59.567836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.877 [2024-04-18 17:07:59.567951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.877 [2024-04-18 17:07:59.567978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.877 [2024-04-18 17:07:59.567993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.877 [2024-04-18 17:07:59.568006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:43.877 [2024-04-18 17:07:59.568042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:43.877 qpair failed and we were unable to recover it. 
00:20:43.877 [2024-04-18 17:07:59.577880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.877 [2024-04-18 17:07:59.577993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.878 [2024-04-18 17:07:59.578019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.878 [2024-04-18 17:07:59.578034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.878 [2024-04-18 17:07:59.578047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:43.878 [2024-04-18 17:07:59.578076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:43.878 qpair failed and we were unable to recover it. 
00:20:44.138 [2024-04-18 17:07:59.587935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.138 [2024-04-18 17:07:59.588057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.138 [2024-04-18 17:07:59.588084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.138 [2024-04-18 17:07:59.588099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.138 [2024-04-18 17:07:59.588111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.138 [2024-04-18 17:07:59.588153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.138 qpair failed and we were unable to recover it. 
00:20:44.138 [2024-04-18 17:07:59.597907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.138 [2024-04-18 17:07:59.598014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.138 [2024-04-18 17:07:59.598040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.138 [2024-04-18 17:07:59.598055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.138 [2024-04-18 17:07:59.598067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.138 [2024-04-18 17:07:59.598097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.138 qpair failed and we were unable to recover it. 
00:20:44.138 [2024-04-18 17:07:59.608026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.138 [2024-04-18 17:07:59.608132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.138 [2024-04-18 17:07:59.608161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.138 [2024-04-18 17:07:59.608176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.138 [2024-04-18 17:07:59.608192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.138 [2024-04-18 17:07:59.608223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.138 qpair failed and we were unable to recover it. 
00:20:44.138 [2024-04-18 17:07:59.618001] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.138 [2024-04-18 17:07:59.618121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.138 [2024-04-18 17:07:59.618154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.138 [2024-04-18 17:07:59.618170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.138 [2024-04-18 17:07:59.618182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.138 [2024-04-18 17:07:59.618214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.138 qpair failed and we were unable to recover it. 
00:20:44.138 [2024-04-18 17:07:59.628134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.138 [2024-04-18 17:07:59.628278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.138 [2024-04-18 17:07:59.628304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.138 [2024-04-18 17:07:59.628320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.138 [2024-04-18 17:07:59.628333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.138 [2024-04-18 17:07:59.628362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.138 qpair failed and we were unable to recover it. 
00:20:44.138 [2024-04-18 17:07:59.638091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.138 [2024-04-18 17:07:59.638198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.138 [2024-04-18 17:07:59.638224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.138 [2024-04-18 17:07:59.638239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.138 [2024-04-18 17:07:59.638251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.138 [2024-04-18 17:07:59.638281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.138 qpair failed and we were unable to recover it. 
00:20:44.138 [2024-04-18 17:07:59.648147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.138 [2024-04-18 17:07:59.648263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.138 [2024-04-18 17:07:59.648289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.138 [2024-04-18 17:07:59.648304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.138 [2024-04-18 17:07:59.648316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.138 [2024-04-18 17:07:59.648346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.138 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.658143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.658268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.658298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.658313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.658331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.658362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.668181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.668320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.668347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.668362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.668374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.668413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.678136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.678239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.678265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.678280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.678293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.678322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.688259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.688370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.688406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.688422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.688434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.688464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.698299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.698442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.698468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.698483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.698495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.698524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.708270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.708393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.708421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.708436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.708448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.708478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.718331] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.718461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.718487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.718502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.718514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.718544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.728329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.728455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.728482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.728497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.728509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.728539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.738315] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.738433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.738460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.738475] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.738487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.738515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.748353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.748483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.748509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.748524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.748542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.748571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.758411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.758529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.758555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.758570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.758582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.758612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.768388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.768491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.768517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.768532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.768545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.768574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.778487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.778599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.778626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.778641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.139 [2024-04-18 17:07:59.778653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.139 [2024-04-18 17:07:59.778682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.139 qpair failed and we were unable to recover it. 
00:20:44.139 [2024-04-18 17:07:59.788558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.139 [2024-04-18 17:07:59.788667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.139 [2024-04-18 17:07:59.788693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.139 [2024-04-18 17:07:59.788708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.140 [2024-04-18 17:07:59.788720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.140 [2024-04-18 17:07:59.788749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.140 qpair failed and we were unable to recover it. 
00:20:44.140 [2024-04-18 17:07:59.798527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.140 [2024-04-18 17:07:59.798639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.140 [2024-04-18 17:07:59.798664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.140 [2024-04-18 17:07:59.798679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.140 [2024-04-18 17:07:59.798691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.140 [2024-04-18 17:07:59.798721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.140 qpair failed and we were unable to recover it. 
00:20:44.140 [2024-04-18 17:07:59.808534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.140 [2024-04-18 17:07:59.808649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.140 [2024-04-18 17:07:59.808675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.140 [2024-04-18 17:07:59.808689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.140 [2024-04-18 17:07:59.808702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.140 [2024-04-18 17:07:59.808731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.140 qpair failed and we were unable to recover it. 
00:20:44.140 [2024-04-18 17:07:59.818583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.140 [2024-04-18 17:07:59.818697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.140 [2024-04-18 17:07:59.818723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.140 [2024-04-18 17:07:59.818738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.140 [2024-04-18 17:07:59.818750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.140 [2024-04-18 17:07:59.818779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.140 qpair failed and we were unable to recover it. 
00:20:44.140 [2024-04-18 17:07:59.828616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.140 [2024-04-18 17:07:59.828724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.140 [2024-04-18 17:07:59.828751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.140 [2024-04-18 17:07:59.828766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.140 [2024-04-18 17:07:59.828779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.140 [2024-04-18 17:07:59.828807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.140 qpair failed and we were unable to recover it. 
00:20:44.140 [2024-04-18 17:07:59.838621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.140 [2024-04-18 17:07:59.838730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.140 [2024-04-18 17:07:59.838756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.140 [2024-04-18 17:07:59.838777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.140 [2024-04-18 17:07:59.838790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.140 [2024-04-18 17:07:59.838819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.140 qpair failed and we were unable to recover it.
00:20:44.400 [2024-04-18 17:07:59.848683] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.400 [2024-04-18 17:07:59.848802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.400 [2024-04-18 17:07:59.848828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.400 [2024-04-18 17:07:59.848843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.400 [2024-04-18 17:07:59.848856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.400 [2024-04-18 17:07:59.848897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.400 qpair failed and we were unable to recover it.
00:20:44.400 [2024-04-18 17:07:59.858693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.400 [2024-04-18 17:07:59.858807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.400 [2024-04-18 17:07:59.858833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.400 [2024-04-18 17:07:59.858848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.400 [2024-04-18 17:07:59.858860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.400 [2024-04-18 17:07:59.858901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.400 qpair failed and we were unable to recover it.
00:20:44.400 [2024-04-18 17:07:59.868687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.400 [2024-04-18 17:07:59.868800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.400 [2024-04-18 17:07:59.868826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.400 [2024-04-18 17:07:59.868841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.400 [2024-04-18 17:07:59.868854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.868883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.878744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.878851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.878877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.878891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.878904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.878933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.888748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.888882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.888909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.888924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.888936] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.888968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.898879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.899026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.899051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.899067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.899079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.899108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.908795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.908903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.908929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.908944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.908956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.908986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.918820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.918928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.918954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.918968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.918981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.919010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.928880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.928988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.929021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.929037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.929050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.929096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.938930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.939043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.939070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.939085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.939097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.939127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.948948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.949078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.949104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.949120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.949135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.949164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.958970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.959074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.959100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.959115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.959127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.959156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.968998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.969109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.969135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.969150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.969163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.969197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.979019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.979127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.979153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.979167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.979180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.979209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.989036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.989195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.989221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.989236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.989248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.989277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:07:59.999077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.401 [2024-04-18 17:07:59.999188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.401 [2024-04-18 17:07:59.999214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.401 [2024-04-18 17:07:59.999228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.401 [2024-04-18 17:07:59.999240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.401 [2024-04-18 17:07:59.999270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.401 qpair failed and we were unable to recover it.
00:20:44.401 [2024-04-18 17:08:00.009115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.009262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.009291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.009307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.009319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.009349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.019163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.019283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.019315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.019330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.019343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.019373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.029180] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.029293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.029320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.029335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.029348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.029378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.039235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.039347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.039375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.039399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.039412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.039443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.049298] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.049412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.049439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.049454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.049467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.049496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.059243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.059360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.059396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.059413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.059426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.059461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.069255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.069364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.069398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.069414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.069434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.069463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.079374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.079499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.079525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.079540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.079552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.079581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.089317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.089435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.089462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.089477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.089489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.089519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.402 [2024-04-18 17:08:00.099365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.402 [2024-04-18 17:08:00.099501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.402 [2024-04-18 17:08:00.099527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.402 [2024-04-18 17:08:00.099542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.402 [2024-04-18 17:08:00.099554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.402 [2024-04-18 17:08:00.099584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.402 qpair failed and we were unable to recover it.
00:20:44.664 [2024-04-18 17:08:00.109389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.664 [2024-04-18 17:08:00.109515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.664 [2024-04-18 17:08:00.109541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.664 [2024-04-18 17:08:00.109556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.664 [2024-04-18 17:08:00.109568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.664 [2024-04-18 17:08:00.109597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.664 qpair failed and we were unable to recover it.
00:20:44.664 [2024-04-18 17:08:00.119501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.664 [2024-04-18 17:08:00.119627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.664 [2024-04-18 17:08:00.119654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.664 [2024-04-18 17:08:00.119669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.664 [2024-04-18 17:08:00.119682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.664 [2024-04-18 17:08:00.119712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.664 qpair failed and we were unable to recover it.
00:20:44.664 [2024-04-18 17:08:00.129455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.664 [2024-04-18 17:08:00.129568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.664 [2024-04-18 17:08:00.129596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.664 [2024-04-18 17:08:00.129611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.664 [2024-04-18 17:08:00.129624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.664 [2024-04-18 17:08:00.129666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.664 qpair failed and we were unable to recover it.
00:20:44.664 [2024-04-18 17:08:00.139522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.664 [2024-04-18 17:08:00.139638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.664 [2024-04-18 17:08:00.139664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.664 [2024-04-18 17:08:00.139679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.664 [2024-04-18 17:08:00.139691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.664 [2024-04-18 17:08:00.139720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.664 qpair failed and we were unable to recover it.
00:20:44.664 [2024-04-18 17:08:00.149542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.664 [2024-04-18 17:08:00.149708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.664 [2024-04-18 17:08:00.149735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.664 [2024-04-18 17:08:00.149750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.664 [2024-04-18 17:08:00.149768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.664 [2024-04-18 17:08:00.149797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.664 qpair failed and we were unable to recover it.
00:20:44.664 [2024-04-18 17:08:00.159570] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.664 [2024-04-18 17:08:00.159691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.664 [2024-04-18 17:08:00.159719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.664 [2024-04-18 17:08:00.159734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.664 [2024-04-18 17:08:00.159750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.664 [2024-04-18 17:08:00.159781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.664 qpair failed and we were unable to recover it.
00:20:44.664 [2024-04-18 17:08:00.169691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.664 [2024-04-18 17:08:00.169819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.664 [2024-04-18 17:08:00.169845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.664 [2024-04-18 17:08:00.169860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.664 [2024-04-18 17:08:00.169872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.664 [2024-04-18 17:08:00.169902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.664 qpair failed and we were unable to recover it.
00:20:44.664 [2024-04-18 17:08:00.179606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.664 [2024-04-18 17:08:00.179716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.664 [2024-04-18 17:08:00.179743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.664 [2024-04-18 17:08:00.179758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.664 [2024-04-18 17:08:00.179770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.664 [2024-04-18 17:08:00.179799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.664 qpair failed and we were unable to recover it.
00:20:44.665 [2024-04-18 17:08:00.189630] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.665 [2024-04-18 17:08:00.189751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.665 [2024-04-18 17:08:00.189777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.665 [2024-04-18 17:08:00.189792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.665 [2024-04-18 17:08:00.189804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.665 [2024-04-18 17:08:00.189833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.665 qpair failed and we were unable to recover it.
00:20:44.665 [2024-04-18 17:08:00.199671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.199783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.199809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.199824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.199840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.199873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.209666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.209773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.209800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.209815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.209827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.209857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.219701] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.219812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.219838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.219852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.219865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.219894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.229771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.229893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.229918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.229932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.229945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.229974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.239758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.239868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.239894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.239915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.239928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.239957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.249830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.249950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.249976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.249991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.250003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.250032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.259863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.259982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.260008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.260023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.260035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.260064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.269859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.269974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.270001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.270016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.270028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.270059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.279930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.280049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.280075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.280090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.280102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.280131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.289936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.290050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.290076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.290090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.290102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.290132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.299981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.300130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.300155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.300170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.300183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.300223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.310007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.310150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.310176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.665 [2024-04-18 17:08:00.310191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.665 [2024-04-18 17:08:00.310203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.665 [2024-04-18 17:08:00.310232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.665 qpair failed and we were unable to recover it. 
00:20:44.665 [2024-04-18 17:08:00.320056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.665 [2024-04-18 17:08:00.320169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.665 [2024-04-18 17:08:00.320195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.666 [2024-04-18 17:08:00.320209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.666 [2024-04-18 17:08:00.320222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.666 [2024-04-18 17:08:00.320251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.666 qpair failed and we were unable to recover it. 
00:20:44.666 [2024-04-18 17:08:00.330043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.666 [2024-04-18 17:08:00.330163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.666 [2024-04-18 17:08:00.330190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.666 [2024-04-18 17:08:00.330213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.666 [2024-04-18 17:08:00.330227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.666 [2024-04-18 17:08:00.330256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.666 qpair failed and we were unable to recover it. 
00:20:44.666 [2024-04-18 17:08:00.340053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.666 [2024-04-18 17:08:00.340165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.666 [2024-04-18 17:08:00.340191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.666 [2024-04-18 17:08:00.340206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.666 [2024-04-18 17:08:00.340218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.666 [2024-04-18 17:08:00.340247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.666 qpair failed and we were unable to recover it. 
00:20:44.666 [2024-04-18 17:08:00.350068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.666 [2024-04-18 17:08:00.350181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.666 [2024-04-18 17:08:00.350206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.666 [2024-04-18 17:08:00.350221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.666 [2024-04-18 17:08:00.350233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.666 [2024-04-18 17:08:00.350262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.666 qpair failed and we were unable to recover it. 
00:20:44.666 [2024-04-18 17:08:00.360086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.666 [2024-04-18 17:08:00.360188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.666 [2024-04-18 17:08:00.360213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.666 [2024-04-18 17:08:00.360228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.666 [2024-04-18 17:08:00.360240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.666 [2024-04-18 17:08:00.360269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.666 qpair failed and we were unable to recover it. 
00:20:44.927 [2024-04-18 17:08:00.370126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.927 [2024-04-18 17:08:00.370239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.927 [2024-04-18 17:08:00.370265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.927 [2024-04-18 17:08:00.370280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.927 [2024-04-18 17:08:00.370293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.927 [2024-04-18 17:08:00.370323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.927 qpair failed and we were unable to recover it. 
00:20:44.927 [2024-04-18 17:08:00.380179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.927 [2024-04-18 17:08:00.380290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.927 [2024-04-18 17:08:00.380316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.927 [2024-04-18 17:08:00.380331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.927 [2024-04-18 17:08:00.380343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.927 [2024-04-18 17:08:00.380372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.927 qpair failed and we were unable to recover it. 
00:20:44.927 [2024-04-18 17:08:00.390194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.927 [2024-04-18 17:08:00.390309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.927 [2024-04-18 17:08:00.390334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.927 [2024-04-18 17:08:00.390349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.927 [2024-04-18 17:08:00.390361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.927 [2024-04-18 17:08:00.390396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.927 qpair failed and we were unable to recover it. 
00:20:44.927 [2024-04-18 17:08:00.400283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.927 [2024-04-18 17:08:00.400408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.927 [2024-04-18 17:08:00.400434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.927 [2024-04-18 17:08:00.400448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.928 [2024-04-18 17:08:00.400460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.928 [2024-04-18 17:08:00.400489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.928 qpair failed and we were unable to recover it. 
00:20:44.928 [2024-04-18 17:08:00.410218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.928 [2024-04-18 17:08:00.410322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.928 [2024-04-18 17:08:00.410348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.928 [2024-04-18 17:08:00.410363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.928 [2024-04-18 17:08:00.410375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.928 [2024-04-18 17:08:00.410412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.928 qpair failed and we were unable to recover it. 
00:20:44.928 [2024-04-18 17:08:00.420287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.928 [2024-04-18 17:08:00.420406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.928 [2024-04-18 17:08:00.420438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.928 [2024-04-18 17:08:00.420453] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.928 [2024-04-18 17:08:00.420466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.928 [2024-04-18 17:08:00.420496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.928 qpair failed and we were unable to recover it. 
00:20:44.928 [2024-04-18 17:08:00.430302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.928 [2024-04-18 17:08:00.430424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.928 [2024-04-18 17:08:00.430452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.928 [2024-04-18 17:08:00.430468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.928 [2024-04-18 17:08:00.430481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.928 [2024-04-18 17:08:00.430512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.928 qpair failed and we were unable to recover it. 
00:20:44.928 [2024-04-18 17:08:00.440320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.928 [2024-04-18 17:08:00.440483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.928 [2024-04-18 17:08:00.440510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.928 [2024-04-18 17:08:00.440525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.928 [2024-04-18 17:08:00.440537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.928 [2024-04-18 17:08:00.440568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.928 qpair failed and we were unable to recover it. 
00:20:44.928 [2024-04-18 17:08:00.450343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.928 [2024-04-18 17:08:00.450483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.928 [2024-04-18 17:08:00.450512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.928 [2024-04-18 17:08:00.450530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.928 [2024-04-18 17:08:00.450544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.928 [2024-04-18 17:08:00.450574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.928 qpair failed and we were unable to recover it. 
00:20:44.928 [2024-04-18 17:08:00.460392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.928 [2024-04-18 17:08:00.460513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.928 [2024-04-18 17:08:00.460539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.928 [2024-04-18 17:08:00.460555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.928 [2024-04-18 17:08:00.460568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.928 [2024-04-18 17:08:00.460604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.928 qpair failed and we were unable to recover it. 
00:20:44.928 [2024-04-18 17:08:00.470429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.928 [2024-04-18 17:08:00.470543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.928 [2024-04-18 17:08:00.470570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.928 [2024-04-18 17:08:00.470585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.928 [2024-04-18 17:08:00.470597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:44.928 [2024-04-18 17:08:00.470639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:44.928 qpair failed and we were unable to recover it. 
00:20:44.928 [2024-04-18 17:08:00.480438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.928 [2024-04-18 17:08:00.480546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.928 [2024-04-18 17:08:00.480573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.928 [2024-04-18 17:08:00.480589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.928 [2024-04-18 17:08:00.480601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.928 [2024-04-18 17:08:00.480642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.928 qpair failed and we were unable to recover it.
00:20:44.928 [2024-04-18 17:08:00.490509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.928 [2024-04-18 17:08:00.490680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.928 [2024-04-18 17:08:00.490705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.928 [2024-04-18 17:08:00.490719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.928 [2024-04-18 17:08:00.490732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.928 [2024-04-18 17:08:00.490761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.928 qpair failed and we were unable to recover it.
00:20:44.928 [2024-04-18 17:08:00.500509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.928 [2024-04-18 17:08:00.500629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.928 [2024-04-18 17:08:00.500657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.928 [2024-04-18 17:08:00.500671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.928 [2024-04-18 17:08:00.500684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.928 [2024-04-18 17:08:00.500724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.928 qpair failed and we were unable to recover it.
00:20:44.928 [2024-04-18 17:08:00.510520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.928 [2024-04-18 17:08:00.510631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.928 [2024-04-18 17:08:00.510663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.928 [2024-04-18 17:08:00.510678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.928 [2024-04-18 17:08:00.510691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.928 [2024-04-18 17:08:00.510721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.928 qpair failed and we were unable to recover it.
00:20:44.928 [2024-04-18 17:08:00.520614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.928 [2024-04-18 17:08:00.520764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.928 [2024-04-18 17:08:00.520790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.928 [2024-04-18 17:08:00.520805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.928 [2024-04-18 17:08:00.520817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.928 [2024-04-18 17:08:00.520846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.928 qpair failed and we were unable to recover it.
00:20:44.928 [2024-04-18 17:08:00.530571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.928 [2024-04-18 17:08:00.530685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.928 [2024-04-18 17:08:00.530711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.928 [2024-04-18 17:08:00.530726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.928 [2024-04-18 17:08:00.530739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.928 [2024-04-18 17:08:00.530768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.928 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.540600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.540710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.540736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.540751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.540764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.540794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.550607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.550720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.550746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.550760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.550779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.550809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.560773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.560908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.560933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.560948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.560960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.560989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.570671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.570778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.570804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.570818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.570831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.570860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.580715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.580872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.580899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.580914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.580926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.580955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.590730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.590846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.590872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.590887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.590899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.590940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.600736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.600882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.600907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.600922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.600934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.600963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.610804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.610938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.610964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.610978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.610991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.611020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.620820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.620928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.620953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.620968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.620980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.621011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:44.929 [2024-04-18 17:08:00.630827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:44.929 [2024-04-18 17:08:00.630937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:44.929 [2024-04-18 17:08:00.630963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:44.929 [2024-04-18 17:08:00.630977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:44.929 [2024-04-18 17:08:00.630990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:44.929 [2024-04-18 17:08:00.631018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:44.929 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.640860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.640973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.640999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.641020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.641033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.641064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.650980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.651096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.651122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.651137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.651149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.651178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.660927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.661039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.661065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.661080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.661092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.661121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.670990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.671104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.671129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.671144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.671157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.671186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.680986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.681087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.681113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.681127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.681140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.681170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.691053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.691162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.691189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.691203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.691216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.691257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.701070] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.701185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.701211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.701226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.701238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.701267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.711081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.711200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.711226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.711241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.711253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.711282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.721105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.721213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.721241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.721256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.721269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.721303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.731169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.731285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.731312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.731332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.731346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.731376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.741197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.741354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.741387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.741404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.741417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.741446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.751173] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.751284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.751310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.751325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.751337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.751366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.190 qpair failed and we were unable to recover it.
00:20:45.190 [2024-04-18 17:08:00.761268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.190 [2024-04-18 17:08:00.761388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.190 [2024-04-18 17:08:00.761414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.190 [2024-04-18 17:08:00.761429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.190 [2024-04-18 17:08:00.761441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.190 [2024-04-18 17:08:00.761482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.191 qpair failed and we were unable to recover it.
00:20:45.191 [2024-04-18 17:08:00.771269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.191 [2024-04-18 17:08:00.771393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.191 [2024-04-18 17:08:00.771419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.191 [2024-04-18 17:08:00.771434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.191 [2024-04-18 17:08:00.771446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.191 [2024-04-18 17:08:00.771476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.191 qpair failed and we were unable to recover it.
00:20:45.191 [2024-04-18 17:08:00.781358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.191 [2024-04-18 17:08:00.781481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.191 [2024-04-18 17:08:00.781508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.191 [2024-04-18 17:08:00.781523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.191 [2024-04-18 17:08:00.781535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.191 [2024-04-18 17:08:00.781565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.191 qpair failed and we were unable to recover it.
00:20:45.191 [2024-04-18 17:08:00.791405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.191 [2024-04-18 17:08:00.791534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.191 [2024-04-18 17:08:00.791560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.191 [2024-04-18 17:08:00.791575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.191 [2024-04-18 17:08:00.791588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.191 [2024-04-18 17:08:00.791617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.191 qpair failed and we were unable to recover it.
00:20:45.191 [2024-04-18 17:08:00.801417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.191 [2024-04-18 17:08:00.801519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.191 [2024-04-18 17:08:00.801545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.191 [2024-04-18 17:08:00.801560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.191 [2024-04-18 17:08:00.801572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.191 [2024-04-18 17:08:00.801601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.191 qpair failed and we were unable to recover it.
00:20:45.191 [2024-04-18 17:08:00.811361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.191 [2024-04-18 17:08:00.811495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.191 [2024-04-18 17:08:00.811522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.191 [2024-04-18 17:08:00.811537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.191 [2024-04-18 17:08:00.811549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.191 [2024-04-18 17:08:00.811578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.191 qpair failed and we were unable to recover it.
00:20:45.191 [2024-04-18 17:08:00.821403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.191 [2024-04-18 17:08:00.821512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.191 [2024-04-18 17:08:00.821544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.191 [2024-04-18 17:08:00.821559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.191 [2024-04-18 17:08:00.821572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.191 [2024-04-18 17:08:00.821601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.191 qpair failed and we were unable to recover it.
00:20:45.191 [2024-04-18 17:08:00.831434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.191 [2024-04-18 17:08:00.831550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.191 [2024-04-18 17:08:00.831576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.191 [2024-04-18 17:08:00.831591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.191 [2024-04-18 17:08:00.831603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.191 [2024-04-18 17:08:00.831632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.191 qpair failed and we were unable to recover it.
00:20:45.191 [2024-04-18 17:08:00.841556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.191 [2024-04-18 17:08:00.841692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.191 [2024-04-18 17:08:00.841718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.191 [2024-04-18 17:08:00.841733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.191 [2024-04-18 17:08:00.841746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.191 [2024-04-18 17:08:00.841775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.191 qpair failed and we were unable to recover it. 
00:20:45.191 [2024-04-18 17:08:00.851458] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.191 [2024-04-18 17:08:00.851566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.191 [2024-04-18 17:08:00.851592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.191 [2024-04-18 17:08:00.851606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.191 [2024-04-18 17:08:00.851619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.191 [2024-04-18 17:08:00.851647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.191 qpair failed and we were unable to recover it. 
00:20:45.191 [2024-04-18 17:08:00.861550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.191 [2024-04-18 17:08:00.861673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.191 [2024-04-18 17:08:00.861699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.191 [2024-04-18 17:08:00.861713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.191 [2024-04-18 17:08:00.861725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.191 [2024-04-18 17:08:00.861760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.191 qpair failed and we were unable to recover it. 
00:20:45.191 [2024-04-18 17:08:00.871558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.191 [2024-04-18 17:08:00.871675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.191 [2024-04-18 17:08:00.871700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.191 [2024-04-18 17:08:00.871715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.191 [2024-04-18 17:08:00.871727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.191 [2024-04-18 17:08:00.871756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.191 qpair failed and we were unable to recover it. 
00:20:45.191 [2024-04-18 17:08:00.881565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.191 [2024-04-18 17:08:00.881668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.191 [2024-04-18 17:08:00.881694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.191 [2024-04-18 17:08:00.881708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.191 [2024-04-18 17:08:00.881721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.191 [2024-04-18 17:08:00.881749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.191 qpair failed and we were unable to recover it. 
00:20:45.191 [2024-04-18 17:08:00.891609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.191 [2024-04-18 17:08:00.891759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.191 [2024-04-18 17:08:00.891785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.191 [2024-04-18 17:08:00.891799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.191 [2024-04-18 17:08:00.891811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.191 [2024-04-18 17:08:00.891840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.191 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.901620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.901735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.901761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.901775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.901788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.901817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.911685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.911805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.911835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.911851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.911863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.911892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.921661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.921764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.921789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.921804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.921816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.921845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.931687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.931787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.931813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.931828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.931840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.931882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.941724] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.941835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.941860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.941875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.941887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.941916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.951827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.951954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.951979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.951994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.952012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.952042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.961890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.961999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.962024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.962038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.962051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.962080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.971794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.971906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.971932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.971947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.971959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.971988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.451 [2024-04-18 17:08:00.981869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.451 [2024-04-18 17:08:00.981983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.451 [2024-04-18 17:08:00.982009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.451 [2024-04-18 17:08:00.982024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.451 [2024-04-18 17:08:00.982036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.451 [2024-04-18 17:08:00.982064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.451 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:00.991942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:00.992051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:00.992076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:00.992090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:00.992102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:00.992131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.001878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.001994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.002019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.002034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.002046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.002075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.012001] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.012109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.012136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.012150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.012163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.012191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.021970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.022085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.022111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.022126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.022138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.022166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.032009] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.032119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.032144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.032159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.032171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.032200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.041994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.042101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.042127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.042141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.042159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.042189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.052043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.052158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.052185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.052199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.052212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.052242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.062132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.062244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.062270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.062284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.062297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.062326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.072075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.072183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.072209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.072224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.072237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.072266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.082118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.082228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.082254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.082269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.082281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.082322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.092228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.092347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.092372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.092396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.092410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.092439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.102188] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.102300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.102326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.102341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.102353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.102396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.112205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.452 [2024-04-18 17:08:01.112361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.452 [2024-04-18 17:08:01.112394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.452 [2024-04-18 17:08:01.112411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.452 [2024-04-18 17:08:01.112424] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.452 [2024-04-18 17:08:01.112453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.452 qpair failed and we were unable to recover it. 
00:20:45.452 [2024-04-18 17:08:01.122270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.452 [2024-04-18 17:08:01.122427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.452 [2024-04-18 17:08:01.122452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.453 [2024-04-18 17:08:01.122466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.453 [2024-04-18 17:08:01.122479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.453 [2024-04-18 17:08:01.122520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.453 qpair failed and we were unable to recover it.
00:20:45.453 [2024-04-18 17:08:01.132341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.453 [2024-04-18 17:08:01.132449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.453 [2024-04-18 17:08:01.132475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.453 [2024-04-18 17:08:01.132496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.453 [2024-04-18 17:08:01.132509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.453 [2024-04-18 17:08:01.132539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.453 qpair failed and we were unable to recover it.
00:20:45.453 [2024-04-18 17:08:01.142388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.453 [2024-04-18 17:08:01.142502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.453 [2024-04-18 17:08:01.142528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.453 [2024-04-18 17:08:01.142543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.453 [2024-04-18 17:08:01.142555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.453 [2024-04-18 17:08:01.142584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.453 qpair failed and we were unable to recover it.
00:20:45.453 [2024-04-18 17:08:01.152315] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.453 [2024-04-18 17:08:01.152442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.453 [2024-04-18 17:08:01.152468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.453 [2024-04-18 17:08:01.152483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.453 [2024-04-18 17:08:01.152495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.453 [2024-04-18 17:08:01.152524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.453 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.162332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.162486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.162511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.162526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.162539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.162568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.172374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.172488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.172514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.172529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.172541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.172570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.182420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.182547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.182571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.182586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.182598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.182627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.192440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.192550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.192575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.192589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.192602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.192631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.202435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.202544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.202568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.202583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.202595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.202624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.212595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.212717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.212742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.212757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.212769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.212798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.222530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.222659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.222690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.222706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.222719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.222748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.232567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.232693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.232721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.232736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.232752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.232784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.242653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.242809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.242835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.242849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.242862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.242891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.712 [2024-04-18 17:08:01.252635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.712 [2024-04-18 17:08:01.252747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.712 [2024-04-18 17:08:01.252773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.712 [2024-04-18 17:08:01.252788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.712 [2024-04-18 17:08:01.252801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.712 [2024-04-18 17:08:01.252830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.712 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.262672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.262783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.262809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.262823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.262835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.262870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.272702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.272811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.272836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.272852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.272864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.272893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.282706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.282817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.282843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.282858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.282870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.282902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.292734] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.292854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.292881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.292896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.292908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.292939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.302779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.302936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.302961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.302976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.302988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.303017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.312794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.312907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.312938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.312953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.312966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.312995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.322837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.322939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.322964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.322979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.322991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.323020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.332838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.332946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.332971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.332986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.332998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.333028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.342865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.342977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.343002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.343017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.343029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.343057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.352901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.353008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.353033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.353048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.353065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.353097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.362907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.363015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.363041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.363055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.363068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.363096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.373040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.373182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.373207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.373222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.373234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.373262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.382979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.383096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.383122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.713 [2024-04-18 17:08:01.383136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.713 [2024-04-18 17:08:01.383149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.713 [2024-04-18 17:08:01.383178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.713 qpair failed and we were unable to recover it.
00:20:45.713 [2024-04-18 17:08:01.393013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.713 [2024-04-18 17:08:01.393118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.713 [2024-04-18 17:08:01.393143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.714 [2024-04-18 17:08:01.393158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.714 [2024-04-18 17:08:01.393170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.714 [2024-04-18 17:08:01.393199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.714 qpair failed and we were unable to recover it.
00:20:45.714 [2024-04-18 17:08:01.403033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.714 [2024-04-18 17:08:01.403163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.714 [2024-04-18 17:08:01.403188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.714 [2024-04-18 17:08:01.403203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.714 [2024-04-18 17:08:01.403215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.714 [2024-04-18 17:08:01.403244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.714 qpair failed and we were unable to recover it.
00:20:45.714 [2024-04-18 17:08:01.413062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.714 [2024-04-18 17:08:01.413167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.714 [2024-04-18 17:08:01.413193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.714 [2024-04-18 17:08:01.413207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.714 [2024-04-18 17:08:01.413220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.714 [2024-04-18 17:08:01.413249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.714 qpair failed and we were unable to recover it.
00:20:45.973 [2024-04-18 17:08:01.423108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.973 [2024-04-18 17:08:01.423252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.973 [2024-04-18 17:08:01.423278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.973 [2024-04-18 17:08:01.423292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.973 [2024-04-18 17:08:01.423305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.973 [2024-04-18 17:08:01.423334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.973 qpair failed and we were unable to recover it.
00:20:45.973 [2024-04-18 17:08:01.433137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.973 [2024-04-18 17:08:01.433251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.973 [2024-04-18 17:08:01.433276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.973 [2024-04-18 17:08:01.433291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.973 [2024-04-18 17:08:01.433303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.973 [2024-04-18 17:08:01.433332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.973 qpair failed and we were unable to recover it.
00:20:45.973 [2024-04-18 17:08:01.443140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.973 [2024-04-18 17:08:01.443278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.973 [2024-04-18 17:08:01.443304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.973 [2024-04-18 17:08:01.443319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.973 [2024-04-18 17:08:01.443336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.973 [2024-04-18 17:08:01.443366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.973 qpair failed and we were unable to recover it.
00:20:45.973 [2024-04-18 17:08:01.453181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.974 [2024-04-18 17:08:01.453297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.974 [2024-04-18 17:08:01.453323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.974 [2024-04-18 17:08:01.453338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.974 [2024-04-18 17:08:01.453350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.974 [2024-04-18 17:08:01.453387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.974 qpair failed and we were unable to recover it.
00:20:45.974 [2024-04-18 17:08:01.463306] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.974 [2024-04-18 17:08:01.463427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.974 [2024-04-18 17:08:01.463453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.974 [2024-04-18 17:08:01.463467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.974 [2024-04-18 17:08:01.463479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.974 [2024-04-18 17:08:01.463509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.974 qpair failed and we were unable to recover it.
00:20:45.974 [2024-04-18 17:08:01.473231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:45.974 [2024-04-18 17:08:01.473341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:45.974 [2024-04-18 17:08:01.473366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:45.974 [2024-04-18 17:08:01.473389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:45.974 [2024-04-18 17:08:01.473404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:45.974 [2024-04-18 17:08:01.473434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:45.974 qpair failed and we were unable to recover it.
00:20:45.974 [2024-04-18 17:08:01.483263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.483370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.483403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.483418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.483431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.483459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.493291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.493405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.493430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.493444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.493457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.493486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.503352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.503473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.503499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.503513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.503525] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.503554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.513376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.513499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.513525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.513540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.513552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.513583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.523387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.523523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.523548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.523564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.523577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.523605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.533440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.533548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.533574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.533595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.533608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.533638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.543471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.543614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.543640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.543654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.543667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.543696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.553500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.553617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.553642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.553657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.553669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.553699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.563507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.563627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.563653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.563668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.563680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.563709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.573577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.573692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.573720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.573738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.573752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.974 [2024-04-18 17:08:01.573782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.974 qpair failed and we were unable to recover it. 
00:20:45.974 [2024-04-18 17:08:01.583596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.974 [2024-04-18 17:08:01.583713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.974 [2024-04-18 17:08:01.583739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.974 [2024-04-18 17:08:01.583754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.974 [2024-04-18 17:08:01.583766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.583796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.593621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.593734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.593760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.593775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.593788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.593817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.603633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.603742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.603768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.603783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.603796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.603825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.613659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.613771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.613797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.613812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.613825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.613855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.623670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.623783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.623814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.623830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.623843] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.623872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.633722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.633832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.633858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.633875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.633888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.633918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.643714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.643821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.643846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.643861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.643873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.643902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.653735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.653846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.653871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.653886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.653898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.653940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.663772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.663880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.663905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.663920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.663932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.663967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:45.975 [2024-04-18 17:08:01.673830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.975 [2024-04-18 17:08:01.673982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.975 [2024-04-18 17:08:01.674007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.975 [2024-04-18 17:08:01.674022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.975 [2024-04-18 17:08:01.674034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:45.975 [2024-04-18 17:08:01.674064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:45.975 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.683880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.683990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.684017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.684032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.684045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.684077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.693844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.693995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.694021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.694037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.694050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.694079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.704040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.704214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.704240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.704255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.704268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.704299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.714050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.714163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.714198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.714214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.714226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.714256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.724075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.724180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.724206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.724220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.724233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.724262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.734030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.734142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.734168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.734183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.734195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.734224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.744026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.744138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.744164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.744178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.744190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.744219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.754044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.754168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.754193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.754208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.754220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.754254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.764056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.764159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.764185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.764199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.764211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.764240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.774181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.774318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.774343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.774358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.774370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.774408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.784129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.784249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.784274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.784289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.235 [2024-04-18 17:08:01.784302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.235 [2024-04-18 17:08:01.784331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.235 qpair failed and we were unable to recover it. 
00:20:46.235 [2024-04-18 17:08:01.794231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.235 [2024-04-18 17:08:01.794348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.235 [2024-04-18 17:08:01.794373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.235 [2024-04-18 17:08:01.794396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.794410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.794439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.804184] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.804313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.804338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.804353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.804365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.804400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.814210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.814354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.814387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.814407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.814420] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.814450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.824276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.824433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.824459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.824473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.824485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.824515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.834260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.834365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.834403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.834419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.834432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.834462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.844363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.844475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.844501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.844516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.844534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.844564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.854311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.854436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.854463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.854483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.854496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.854526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.864387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.864507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.864534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.864549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.864561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.864590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.874423] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.874562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.874588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.874603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.874616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.874645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.884453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.884602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.884630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.884646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.884658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.884689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.894438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.894556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.894583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.894598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.894610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.894639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.904475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.904586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.904612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.904627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.904639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.904681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.914474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.914587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.914614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.914629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.914641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.914670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.924515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.236 [2024-04-18 17:08:01.924625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.236 [2024-04-18 17:08:01.924650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.236 [2024-04-18 17:08:01.924665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.236 [2024-04-18 17:08:01.924678] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.236 [2024-04-18 17:08:01.924707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.236 qpair failed and we were unable to recover it. 
00:20:46.236 [2024-04-18 17:08:01.934564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.237 [2024-04-18 17:08:01.934717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.237 [2024-04-18 17:08:01.934743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.237 [2024-04-18 17:08:01.934764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.237 [2024-04-18 17:08:01.934777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.237 [2024-04-18 17:08:01.934807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.237 qpair failed and we were unable to recover it. 
00:20:46.497 [2024-04-18 17:08:01.944609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:01.944741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:01.944766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:01.944781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:01.944794] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:01.944835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:01.954604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:01.954727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:01.954752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:01.954766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:01.954779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:01.954808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:01.964622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:01.964728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:01.964753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:01.964769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:01.964781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:01.964810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:01.974676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:01.974785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:01.974810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:01.974825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:01.974838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:01.974867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:01.984685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:01.984799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:01.984825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:01.984840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:01.984852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:01.984880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:01.994706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:01.994813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:01.994839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:01.994854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:01.994867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:01.994896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:02.004774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:02.004879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:02.004905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:02.004919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:02.004931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:02.004972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:02.014807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:02.014914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:02.014940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:02.014955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:02.014967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:02.014996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:02.024806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:02.024937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:02.024962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:02.024982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:02.024995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:02.025025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:02.034824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:02.034930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:02.034956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:02.034971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:02.034983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:02.035012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:02.044935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:02.045043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:02.045069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:02.045084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:02.045096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:02.045125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:02.054880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:02.054995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:02.055021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:02.055036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:02.055048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:02.055078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:02.064941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:02.065085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:02.065110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:02.065126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.498 [2024-04-18 17:08:02.065138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.498 [2024-04-18 17:08:02.065167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.498 qpair failed and we were unable to recover it. 
00:20:46.498 [2024-04-18 17:08:02.074943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.498 [2024-04-18 17:08:02.075045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.498 [2024-04-18 17:08:02.075071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.498 [2024-04-18 17:08:02.075086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.499 [2024-04-18 17:08:02.075098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.499 [2024-04-18 17:08:02.075127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.499 qpair failed and we were unable to recover it. 
00:20:46.499 [2024-04-18 17:08:02.084952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.499 [2024-04-18 17:08:02.085058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.499 [2024-04-18 17:08:02.085084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.499 [2024-04-18 17:08:02.085098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.499 [2024-04-18 17:08:02.085111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.499 [2024-04-18 17:08:02.085139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.499 qpair failed and we were unable to recover it. 
[... the same six-message CONNECT failure sequence (ctrlr.c: "Unknown controller ID 0x1" through "qpair failed and we were unable to recover it.") repeated 35 more times at ~10 ms intervals, [2024-04-18 17:08:02.094990] through [2024-04-18 17:08:02.436199], differing only in timestamps ...]
00:20:46.763 [2024-04-18 17:08:02.445999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.763 [2024-04-18 17:08:02.446113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.763 [2024-04-18 17:08:02.446139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.763 [2024-04-18 17:08:02.446154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.763 [2024-04-18 17:08:02.446166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.763 [2024-04-18 17:08:02.446195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.763 qpair failed and we were unable to recover it. 
00:20:46.763 [2024-04-18 17:08:02.456012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.763 [2024-04-18 17:08:02.456143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.763 [2024-04-18 17:08:02.456169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.763 [2024-04-18 17:08:02.456183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.763 [2024-04-18 17:08:02.456196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:46.763 [2024-04-18 17:08:02.456237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:46.763 qpair failed and we were unable to recover it. 
00:20:47.022 [2024-04-18 17:08:02.466079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.022 [2024-04-18 17:08:02.466199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.022 [2024-04-18 17:08:02.466227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.022 [2024-04-18 17:08:02.466243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.022 [2024-04-18 17:08:02.466259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.022 [2024-04-18 17:08:02.466290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.022 qpair failed and we were unable to recover it. 
00:20:47.022 [2024-04-18 17:08:02.476098] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.022 [2024-04-18 17:08:02.476211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.022 [2024-04-18 17:08:02.476237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.022 [2024-04-18 17:08:02.476252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.022 [2024-04-18 17:08:02.476265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.476296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.486137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.486240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.486267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.486282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.486294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.486324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.496212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.496323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.496348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.496362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.496375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.496412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.506201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.506353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.506379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.506403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.506416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.506445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.516195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.516307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.516338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.516354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.516366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.516405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.526288] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.526405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.526431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.526445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.526458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.526486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.536232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.536340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.536366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.536389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.536404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.536433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.546278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.546426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.546452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.546466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.546479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.546508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.556331] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.556447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.556474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.556489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.556501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.556540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.566347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.566466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.566493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.566507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.566520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.566549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.576386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.576501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.576527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.576542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.576555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.576596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.586391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.586512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.586538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.586552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.586565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.586594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.596444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.596551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.596578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.596594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.596606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.596645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.606554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.606660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.606691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.606707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.606719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.606748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.616484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.616605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.616630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.616645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.616658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.616687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.626511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.626623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.626649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.626664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.626676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.626705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.636555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.636666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.636692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.636707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.636719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.636748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.646590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.646711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.646737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.646751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.646769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.646799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.656588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.656697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.656723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.656738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.656750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.656792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.666721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.666859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.666884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.666899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.666911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.666940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.676678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.676790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.676816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.676831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.676843] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.676872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.686749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.686854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.686880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.686894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.686907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.686935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.696705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.696834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.696860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.696874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.696887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.696916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.706755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.706869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.706894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.706908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.706921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.706949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.716786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.716911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.716936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.716951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.023 [2024-04-18 17:08:02.716963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.023 [2024-04-18 17:08:02.716992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.023 qpair failed and we were unable to recover it. 
00:20:47.023 [2024-04-18 17:08:02.726888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.023 [2024-04-18 17:08:02.727002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.023 [2024-04-18 17:08:02.727027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.023 [2024-04-18 17:08:02.727042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.024 [2024-04-18 17:08:02.727054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.024 [2024-04-18 17:08:02.727083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.024 qpair failed and we were unable to recover it. 
00:20:47.283 [2024-04-18 17:08:02.736905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.283 [2024-04-18 17:08:02.737017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.283 [2024-04-18 17:08:02.737043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.283 [2024-04-18 17:08:02.737058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.283 [2024-04-18 17:08:02.737076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.283 [2024-04-18 17:08:02.737107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.283 qpair failed and we were unable to recover it. 
00:20:47.283 [2024-04-18 17:08:02.746874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.283 [2024-04-18 17:08:02.746986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.283 [2024-04-18 17:08:02.747011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.283 [2024-04-18 17:08:02.747026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.283 [2024-04-18 17:08:02.747039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.283 [2024-04-18 17:08:02.747068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.283 qpair failed and we were unable to recover it. 
00:20:47.283 [2024-04-18 17:08:02.756862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.283 [2024-04-18 17:08:02.757021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.283 [2024-04-18 17:08:02.757047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.283 [2024-04-18 17:08:02.757062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.283 [2024-04-18 17:08:02.757074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.283 [2024-04-18 17:08:02.757103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.283 qpair failed and we were unable to recover it. 
00:20:47.283 [2024-04-18 17:08:02.766881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.283 [2024-04-18 17:08:02.766987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.283 [2024-04-18 17:08:02.767012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.767026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.767039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.767068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.776925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.777030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.777055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.777070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.777083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.777112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.787021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.787142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.787168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.787183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.787195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.787225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.796996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.797104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.797130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.797145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.797157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.797186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.807062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.807208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.807234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.807250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.807262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.807291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.817064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.817185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.817212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.817228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.817244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.817275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.827176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.827327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.827353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.827393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.827408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.827438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.837119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.837236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.837262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.837277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.837290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.837330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.847159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.847274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.847300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.847314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.847326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.847356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.857163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.857272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.857297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.857312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.857325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.857354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.867229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.867348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.867373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.867397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.867411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.867440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.877230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.877342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.877369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.877392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.877406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.877435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.887263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.887376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.887410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.887428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.887443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.887473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.284 [2024-04-18 17:08:02.897303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.284 [2024-04-18 17:08:02.897419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.284 [2024-04-18 17:08:02.897446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.284 [2024-04-18 17:08:02.897463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.284 [2024-04-18 17:08:02.897476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.284 [2024-04-18 17:08:02.897505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.284 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.907426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.907541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.907568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.907582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.907595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.907625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.917342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.917463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.917495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.917514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.917527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.917556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.927356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.927479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.927505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.927519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.927532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.927561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.937459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.937574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.937601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.937616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.937628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.937669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.947442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.947560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.947587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.947601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.947613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.947643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.957471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.957580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.957604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.957619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.957631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.957666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.967515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.967635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.967661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.967676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.967688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.967718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.977565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.977682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.977708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.977723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.977735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.977765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.285 [2024-04-18 17:08:02.987652] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.285 [2024-04-18 17:08:02.987764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.285 [2024-04-18 17:08:02.987790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.285 [2024-04-18 17:08:02.987806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.285 [2024-04-18 17:08:02.987818] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.285 [2024-04-18 17:08:02.987849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.285 qpair failed and we were unable to recover it. 
00:20:47.545 [2024-04-18 17:08:02.997614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.545 [2024-04-18 17:08:02.997724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.545 [2024-04-18 17:08:02.997750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.545 [2024-04-18 17:08:02.997764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.545 [2024-04-18 17:08:02.997777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.545 [2024-04-18 17:08:02.997806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.545 qpair failed and we were unable to recover it. 
00:20:47.545 [2024-04-18 17:08:03.007590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.545 [2024-04-18 17:08:03.007698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.545 [2024-04-18 17:08:03.007729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.545 [2024-04-18 17:08:03.007745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.545 [2024-04-18 17:08:03.007758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.545 [2024-04-18 17:08:03.007787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.545 qpair failed and we were unable to recover it. 
00:20:47.545 [2024-04-18 17:08:03.017725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.545 [2024-04-18 17:08:03.017831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.545 [2024-04-18 17:08:03.017856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.545 [2024-04-18 17:08:03.017871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.545 [2024-04-18 17:08:03.017884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.545 [2024-04-18 17:08:03.017913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.545 qpair failed and we were unable to recover it. 
00:20:47.545 [2024-04-18 17:08:03.027661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.545 [2024-04-18 17:08:03.027777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.545 [2024-04-18 17:08:03.027802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.545 [2024-04-18 17:08:03.027817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.545 [2024-04-18 17:08:03.027830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.545 [2024-04-18 17:08:03.027860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.545 qpair failed and we were unable to recover it. 
00:20:47.545 [2024-04-18 17:08:03.037695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.545 [2024-04-18 17:08:03.037807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.545 [2024-04-18 17:08:03.037832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.545 [2024-04-18 17:08:03.037847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.545 [2024-04-18 17:08:03.037859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.545 [2024-04-18 17:08:03.037900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.545 qpair failed and we were unable to recover it. 
00:20:47.545 [2024-04-18 17:08:03.047711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.545 [2024-04-18 17:08:03.047809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.545 [2024-04-18 17:08:03.047835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.545 [2024-04-18 17:08:03.047849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.545 [2024-04-18 17:08:03.047868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.545 [2024-04-18 17:08:03.047898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.545 qpair failed and we were unable to recover it.
00:20:47.545 [2024-04-18 17:08:03.057737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.545 [2024-04-18 17:08:03.057845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.545 [2024-04-18 17:08:03.057870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.545 [2024-04-18 17:08:03.057885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.545 [2024-04-18 17:08:03.057897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.545 [2024-04-18 17:08:03.057927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.545 qpair failed and we were unable to recover it.
00:20:47.545 [2024-04-18 17:08:03.067776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.545 [2024-04-18 17:08:03.067885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.545 [2024-04-18 17:08:03.067911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.545 [2024-04-18 17:08:03.067925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.545 [2024-04-18 17:08:03.067939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.545 [2024-04-18 17:08:03.067980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.545 qpair failed and we were unable to recover it.
00:20:47.545 [2024-04-18 17:08:03.077831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.545 [2024-04-18 17:08:03.077949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.545 [2024-04-18 17:08:03.077983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.545 [2024-04-18 17:08:03.077998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.545 [2024-04-18 17:08:03.078010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.545 [2024-04-18 17:08:03.078039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.545 qpair failed and we were unable to recover it.
00:20:47.545 [2024-04-18 17:08:03.087856] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.545 [2024-04-18 17:08:03.087982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.545 [2024-04-18 17:08:03.088010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.545 [2024-04-18 17:08:03.088025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.545 [2024-04-18 17:08:03.088038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.545 [2024-04-18 17:08:03.088066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.545 qpair failed and we were unable to recover it.
00:20:47.545 [2024-04-18 17:08:03.097844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.545 [2024-04-18 17:08:03.097957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.097983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.097998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.098010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.098039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.107912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.108032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.108058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.108073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.108093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.108122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.118028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.118139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.118164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.118178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.118191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.118219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.128011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.128117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.128143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.128158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.128170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.128201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.137981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.138087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.138113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.138128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.138146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.138176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.148018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.148133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.148158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.148173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.148185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.148214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.158060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.158171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.158197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.158212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.158225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.158254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.168120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.168225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.168250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.168265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.168277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.168307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.178092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.178213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.178240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.178255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.178267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.178296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.188144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.188307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.188333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.188348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.188360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.188398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.198232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.198340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.198366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.198388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.198403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.198432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.208190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.208298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.208327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.208342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.208354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.208390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.218257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.218369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.218402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.218421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.218434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.218463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.546 qpair failed and we were unable to recover it.
00:20:47.546 [2024-04-18 17:08:03.228253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.546 [2024-04-18 17:08:03.228361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.546 [2024-04-18 17:08:03.228393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.546 [2024-04-18 17:08:03.228417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.546 [2024-04-18 17:08:03.228431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.546 [2024-04-18 17:08:03.228461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.547 qpair failed and we were unable to recover it.
00:20:47.547 [2024-04-18 17:08:03.238266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.547 [2024-04-18 17:08:03.238371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.547 [2024-04-18 17:08:03.238407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.547 [2024-04-18 17:08:03.238422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.547 [2024-04-18 17:08:03.238435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.547 [2024-04-18 17:08:03.238464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.547 qpair failed and we were unable to recover it.
00:20:47.547 [2024-04-18 17:08:03.248297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.547 [2024-04-18 17:08:03.248440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.547 [2024-04-18 17:08:03.248475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.547 [2024-04-18 17:08:03.248491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.547 [2024-04-18 17:08:03.248504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.547 [2024-04-18 17:08:03.248546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.547 qpair failed and we were unable to recover it.
00:20:47.807 [2024-04-18 17:08:03.258342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.807 [2024-04-18 17:08:03.258469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.807 [2024-04-18 17:08:03.258495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.807 [2024-04-18 17:08:03.258510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.807 [2024-04-18 17:08:03.258523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.807 [2024-04-18 17:08:03.258552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.807 qpair failed and we were unable to recover it.
00:20:47.807 [2024-04-18 17:08:03.268442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.807 [2024-04-18 17:08:03.268556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.807 [2024-04-18 17:08:03.268581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.807 [2024-04-18 17:08:03.268596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.807 [2024-04-18 17:08:03.268609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.807 [2024-04-18 17:08:03.268638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.807 qpair failed and we were unable to recover it.
00:20:47.807 [2024-04-18 17:08:03.278463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.807 [2024-04-18 17:08:03.278600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.807 [2024-04-18 17:08:03.278626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.807 [2024-04-18 17:08:03.278641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.807 [2024-04-18 17:08:03.278653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.807 [2024-04-18 17:08:03.278682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.807 qpair failed and we were unable to recover it.
00:20:47.807 [2024-04-18 17:08:03.288470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.807 [2024-04-18 17:08:03.288581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.807 [2024-04-18 17:08:03.288607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.807 [2024-04-18 17:08:03.288621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.807 [2024-04-18 17:08:03.288633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.807 [2024-04-18 17:08:03.288662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.807 qpair failed and we were unable to recover it.
00:20:47.807 [2024-04-18 17:08:03.298500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.807 [2024-04-18 17:08:03.298608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.298632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.298647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.298659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.298688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.308543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.308656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.308682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.308697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.308709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.308739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.318510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.318636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.318666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.318683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.318695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.318736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.328515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.328650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.328677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.328692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.328704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.328733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.338537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.338647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.338673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.338688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.338700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.338730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.348592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.348709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.348734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.348749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.348761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.348790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.358634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.358740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.358765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.358780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.358792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.358827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.368633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.368743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.368769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.368785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.368798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.368827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.378669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.378781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.378807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.378822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.378834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.378863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.388720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.388832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.388857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.388871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.388883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.388912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.398706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.808 [2024-04-18 17:08:03.398822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.808 [2024-04-18 17:08:03.398848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.808 [2024-04-18 17:08:03.398863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.808 [2024-04-18 17:08:03.398875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.808 [2024-04-18 17:08:03.398903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.808 qpair failed and we were unable to recover it.
00:20:47.808 [2024-04-18 17:08:03.408740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.808 [2024-04-18 17:08:03.408840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.808 [2024-04-18 17:08:03.408871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.808 [2024-04-18 17:08:03.408886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.808 [2024-04-18 17:08:03.408899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.808 [2024-04-18 17:08:03.408940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.808 qpair failed and we were unable to recover it. 
00:20:47.808 [2024-04-18 17:08:03.418817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.808 [2024-04-18 17:08:03.418942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.808 [2024-04-18 17:08:03.418967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.808 [2024-04-18 17:08:03.418982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.808 [2024-04-18 17:08:03.418994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.808 [2024-04-18 17:08:03.419023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.808 qpair failed and we were unable to recover it. 
00:20:47.808 [2024-04-18 17:08:03.428801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.808 [2024-04-18 17:08:03.428913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.808 [2024-04-18 17:08:03.428939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.808 [2024-04-18 17:08:03.428953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.808 [2024-04-18 17:08:03.428966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:47.808 [2024-04-18 17:08:03.428995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:47.808 qpair failed and we were unable to recover it. 
00:20:47.809 [2024-04-18 17:08:03.438827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.809 [2024-04-18 17:08:03.438930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.809 [2024-04-18 17:08:03.438956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.809 [2024-04-18 17:08:03.438971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.809 [2024-04-18 17:08:03.438983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.809 [2024-04-18 17:08:03.439012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.809 qpair failed and we were unable to recover it.
00:20:47.809 [2024-04-18 17:08:03.448842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.809 [2024-04-18 17:08:03.448945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.809 [2024-04-18 17:08:03.448971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.809 [2024-04-18 17:08:03.448986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.809 [2024-04-18 17:08:03.448998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.809 [2024-04-18 17:08:03.449033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.809 qpair failed and we were unable to recover it.
00:20:47.809 [2024-04-18 17:08:03.458912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.809 [2024-04-18 17:08:03.459032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.809 [2024-04-18 17:08:03.459058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.809 [2024-04-18 17:08:03.459072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.809 [2024-04-18 17:08:03.459085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.809 [2024-04-18 17:08:03.459114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.809 qpair failed and we were unable to recover it.
00:20:47.809 [2024-04-18 17:08:03.469000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.809 [2024-04-18 17:08:03.469110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.809 [2024-04-18 17:08:03.469136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.809 [2024-04-18 17:08:03.469150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.809 [2024-04-18 17:08:03.469162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.809 [2024-04-18 17:08:03.469191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.809 qpair failed and we were unable to recover it.
00:20:47.809 [2024-04-18 17:08:03.479031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.809 [2024-04-18 17:08:03.479139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.809 [2024-04-18 17:08:03.479164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.809 [2024-04-18 17:08:03.479178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.809 [2024-04-18 17:08:03.479191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.809 [2024-04-18 17:08:03.479219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.809 qpair failed and we were unable to recover it.
00:20:47.809 [2024-04-18 17:08:03.488954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.809 [2024-04-18 17:08:03.489054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.809 [2024-04-18 17:08:03.489079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.809 [2024-04-18 17:08:03.489094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.809 [2024-04-18 17:08:03.489106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.809 [2024-04-18 17:08:03.489134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.809 qpair failed and we were unable to recover it.
00:20:47.809 [2024-04-18 17:08:03.499077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.809 [2024-04-18 17:08:03.499184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.809 [2024-04-18 17:08:03.499208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.809 [2024-04-18 17:08:03.499222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.809 [2024-04-18 17:08:03.499235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.809 [2024-04-18 17:08:03.499264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.809 qpair failed and we were unable to recover it.
00:20:47.809 [2024-04-18 17:08:03.509020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:47.809 [2024-04-18 17:08:03.509130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:47.809 [2024-04-18 17:08:03.509155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:47.809 [2024-04-18 17:08:03.509170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:47.809 [2024-04-18 17:08:03.509183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:47.809 [2024-04-18 17:08:03.509211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:47.809 qpair failed and we were unable to recover it.
00:20:48.070 [2024-04-18 17:08:03.519043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.070 [2024-04-18 17:08:03.519192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.519218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.519233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.519245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.519274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.529066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.529170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.529195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.529210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.529222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.529251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.539120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.539250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.539275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.539290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.539309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.539338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.549231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.549348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.549377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.549404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.549418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.549448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.559160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.559299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.559326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.559341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.559357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.559396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.569215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.569325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.569351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.569366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.569379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.569421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.579224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.579346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.579373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.579400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.579414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.579444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.589230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.589347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.589373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.589397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.589410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.589441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.599350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.599463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.599488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.599503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.599515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.599545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.609302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.609416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.609442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.609457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.609469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.609498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.619332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.619446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.619472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.619487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.619500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.619528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.629350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.629467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.629492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.629513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.629526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.629555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.639371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.639484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.639510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.639526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.639538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.639567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.649474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.649599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.649624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.649642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.649654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.649684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.659439] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.659545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.659571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.659586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.659598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.659627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.669571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.669684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.669709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.669724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.669736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.669764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.679498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.679609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.679635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.679649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.679662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.679702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.689506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.689612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.689638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.689652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.689665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.689694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.699556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.699669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.699695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.699710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.699722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.699751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.709603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.709718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.709747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.709762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.709774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.709803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.719661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.071 [2024-04-18 17:08:03.719772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.071 [2024-04-18 17:08:03.719802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.071 [2024-04-18 17:08:03.719819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.071 [2024-04-18 17:08:03.719831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.071 [2024-04-18 17:08:03.719860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.071 qpair failed and we were unable to recover it.
00:20:48.071 [2024-04-18 17:08:03.729680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.071 [2024-04-18 17:08:03.729794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.071 [2024-04-18 17:08:03.729820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.071 [2024-04-18 17:08:03.729835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.071 [2024-04-18 17:08:03.729847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.071 [2024-04-18 17:08:03.729888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.071 qpair failed and we were unable to recover it. 
00:20:48.071 [2024-04-18 17:08:03.739674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.071 [2024-04-18 17:08:03.739778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.071 [2024-04-18 17:08:03.739804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.071 [2024-04-18 17:08:03.739819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.071 [2024-04-18 17:08:03.739831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.071 [2024-04-18 17:08:03.739860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.071 qpair failed and we were unable to recover it. 
00:20:48.071 [2024-04-18 17:08:03.749790] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.071 [2024-04-18 17:08:03.749897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.071 [2024-04-18 17:08:03.749923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.071 [2024-04-18 17:08:03.749938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.071 [2024-04-18 17:08:03.749950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.071 [2024-04-18 17:08:03.749979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.071 qpair failed and we were unable to recover it. 
00:20:48.071 [2024-04-18 17:08:03.759735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.071 [2024-04-18 17:08:03.759894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.071 [2024-04-18 17:08:03.759920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.071 [2024-04-18 17:08:03.759935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.071 [2024-04-18 17:08:03.759947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.071 [2024-04-18 17:08:03.759987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.071 qpair failed and we were unable to recover it. 
00:20:48.071 [2024-04-18 17:08:03.769768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.071 [2024-04-18 17:08:03.769879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.071 [2024-04-18 17:08:03.769905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.071 [2024-04-18 17:08:03.769920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.071 [2024-04-18 17:08:03.769932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.071 [2024-04-18 17:08:03.769973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.071 qpair failed and we were unable to recover it. 
00:20:48.333 [2024-04-18 17:08:03.779782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.333 [2024-04-18 17:08:03.779895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.333 [2024-04-18 17:08:03.779922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.333 [2024-04-18 17:08:03.779936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.333 [2024-04-18 17:08:03.779949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.333 [2024-04-18 17:08:03.779978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.333 qpair failed and we were unable to recover it. 
00:20:48.333 [2024-04-18 17:08:03.789847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.333 [2024-04-18 17:08:03.789962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.333 [2024-04-18 17:08:03.789987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.333 [2024-04-18 17:08:03.790002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.333 [2024-04-18 17:08:03.790014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.333 [2024-04-18 17:08:03.790043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.333 qpair failed and we were unable to recover it. 
00:20:48.333 [2024-04-18 17:08:03.799937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.333 [2024-04-18 17:08:03.800049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.333 [2024-04-18 17:08:03.800074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.333 [2024-04-18 17:08:03.800089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.333 [2024-04-18 17:08:03.800101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.333 [2024-04-18 17:08:03.800130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.333 qpair failed and we were unable to recover it. 
00:20:48.333 [2024-04-18 17:08:03.809898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.333 [2024-04-18 17:08:03.810005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.333 [2024-04-18 17:08:03.810036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.333 [2024-04-18 17:08:03.810051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.333 [2024-04-18 17:08:03.810063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.333 [2024-04-18 17:08:03.810092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.333 qpair failed and we were unable to recover it. 
00:20:48.333 [2024-04-18 17:08:03.819987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.333 [2024-04-18 17:08:03.820093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.333 [2024-04-18 17:08:03.820118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.333 [2024-04-18 17:08:03.820133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.333 [2024-04-18 17:08:03.820145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.333 [2024-04-18 17:08:03.820174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.333 qpair failed and we were unable to recover it. 
00:20:48.333 [2024-04-18 17:08:03.829965] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.333 [2024-04-18 17:08:03.830074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.333 [2024-04-18 17:08:03.830100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.333 [2024-04-18 17:08:03.830115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.333 [2024-04-18 17:08:03.830127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.333 [2024-04-18 17:08:03.830156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.333 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.839979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.840091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.840116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.840131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.840143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.840172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.850050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.850205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.850231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.850247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.850262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.850308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.860031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.860132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.860158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.860172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.860185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.860214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.870127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.870271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.870297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.870311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.870324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.870353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.880183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.880291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.880316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.880331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.880343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.880372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.890138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.890286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.890311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.890326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.890338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.890367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.900179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.900326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.900357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.900372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.900393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.900423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.910179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.910337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.910362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.910377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.910401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.910432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.920194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.920299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.920324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.920338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.920350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.920379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.930251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.930367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.930399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.930415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.930427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.930456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.940358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.940497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.940522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.940537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.940558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.940588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.950330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.950449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.950474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.950489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.334 [2024-04-18 17:08:03.950502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.334 [2024-04-18 17:08:03.950531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.334 qpair failed and we were unable to recover it. 
00:20:48.334 [2024-04-18 17:08:03.960334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.334 [2024-04-18 17:08:03.960495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.334 [2024-04-18 17:08:03.960523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.334 [2024-04-18 17:08:03.960539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.335 [2024-04-18 17:08:03.960554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.335 [2024-04-18 17:08:03.960585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.335 qpair failed and we were unable to recover it. 
00:20:48.335 [2024-04-18 17:08:03.970366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.335 [2024-04-18 17:08:03.970478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.335 [2024-04-18 17:08:03.970504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.335 [2024-04-18 17:08:03.970519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.335 [2024-04-18 17:08:03.970532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.335 [2024-04-18 17:08:03.970561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.335 qpair failed and we were unable to recover it. 
00:20:48.335 [2024-04-18 17:08:03.980428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.335 [2024-04-18 17:08:03.980572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.335 [2024-04-18 17:08:03.980599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.335 [2024-04-18 17:08:03.980614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.335 [2024-04-18 17:08:03.980627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.335 [2024-04-18 17:08:03.980656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.335 qpair failed and we were unable to recover it. 
00:20:48.335 [2024-04-18 17:08:03.990489] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.335 [2024-04-18 17:08:03.990606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.335 [2024-04-18 17:08:03.990631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.335 [2024-04-18 17:08:03.990646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.335 [2024-04-18 17:08:03.990658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.335 [2024-04-18 17:08:03.990687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.335 qpair failed and we were unable to recover it. 
00:20:48.335 [2024-04-18 17:08:04.000507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.335 [2024-04-18 17:08:04.000616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.335 [2024-04-18 17:08:04.000642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.335 [2024-04-18 17:08:04.000656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.335 [2024-04-18 17:08:04.000668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.335 [2024-04-18 17:08:04.000698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.335 qpair failed and we were unable to recover it. 
00:20:48.335 [2024-04-18 17:08:04.010463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.335 [2024-04-18 17:08:04.010581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.335 [2024-04-18 17:08:04.010606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.335 [2024-04-18 17:08:04.010620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.335 [2024-04-18 17:08:04.010633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.335 [2024-04-18 17:08:04.010662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.335 qpair failed and we were unable to recover it.
00:20:48.335 [2024-04-18 17:08:04.020485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.335 [2024-04-18 17:08:04.020640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.335 [2024-04-18 17:08:04.020666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.335 [2024-04-18 17:08:04.020681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.335 [2024-04-18 17:08:04.020694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.335 [2024-04-18 17:08:04.020723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.335 qpair failed and we were unable to recover it.
00:20:48.335 [2024-04-18 17:08:04.030510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.335 [2024-04-18 17:08:04.030623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.335 [2024-04-18 17:08:04.030648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.335 [2024-04-18 17:08:04.030669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.335 [2024-04-18 17:08:04.030682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.335 [2024-04-18 17:08:04.030711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.335 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.040600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.040735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.040760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.040775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.040787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.040816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.050673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.050779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.050805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.050820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.050832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.050862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.060594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.060751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.060776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.060791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.060804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.060832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.070776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.070902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.070928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.070944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.070956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.070987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.080809] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.080946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.080973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.080991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.081005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.081035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.090699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.090826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.090852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.090867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.090879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.090908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.100817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.100926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.100953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.100968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.100980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.101009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.110857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.110990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.111015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.111030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.111042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.111072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.120782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.120885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.120911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.120932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.120946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.120987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.130856] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.130966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.130993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.131007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.131020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.131048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.597 [2024-04-18 17:08:04.140884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.597 [2024-04-18 17:08:04.141045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.597 [2024-04-18 17:08:04.141070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.597 [2024-04-18 17:08:04.141085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.597 [2024-04-18 17:08:04.141097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.597 [2024-04-18 17:08:04.141126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.597 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.150976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.151112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.151137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.151152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.151164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.151193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.160892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.161017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.161042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.161057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.161069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.161098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.171015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.171137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.171163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.171178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.171191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.171222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.181000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.181110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.181136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.181152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.181164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.181193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.191034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.191177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.191203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.191218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.191230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.191259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.201054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.201212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.201238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.201253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.201265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.201294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.211068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.211233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.211265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.211281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.211294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.211336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.221118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.221231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.221258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.221273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.221285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.221315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.231157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.231268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.231294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.231309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.231321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.231350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.241227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.241331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.241357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.241372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.241392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.241423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.251192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.251351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.251377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.251401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.251415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.251457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.261179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.261288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.261314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.261329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.261341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.261370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.271233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.271368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.271400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.598 [2024-04-18 17:08:04.271416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.598 [2024-04-18 17:08:04.271428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.598 [2024-04-18 17:08:04.271469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.598 qpair failed and we were unable to recover it.
00:20:48.598 [2024-04-18 17:08:04.281268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.598 [2024-04-18 17:08:04.281388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.598 [2024-04-18 17:08:04.281414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.599 [2024-04-18 17:08:04.281429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.599 [2024-04-18 17:08:04.281442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.599 [2024-04-18 17:08:04.281471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.599 qpair failed and we were unable to recover it.
00:20:48.599 [2024-04-18 17:08:04.291323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.599 [2024-04-18 17:08:04.291443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.599 [2024-04-18 17:08:04.291469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.599 [2024-04-18 17:08:04.291486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.599 [2024-04-18 17:08:04.291499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.599 [2024-04-18 17:08:04.291528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.599 qpair failed and we were unable to recover it.
00:20:48.599 [2024-04-18 17:08:04.301329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.599 [2024-04-18 17:08:04.301445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.599 [2024-04-18 17:08:04.301477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.599 [2024-04-18 17:08:04.301493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.599 [2024-04-18 17:08:04.301505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.599 [2024-04-18 17:08:04.301534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.599 qpair failed and we were unable to recover it.
00:20:48.859 [2024-04-18 17:08:04.311352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.859 [2024-04-18 17:08:04.311507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.859 [2024-04-18 17:08:04.311533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.859 [2024-04-18 17:08:04.311549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.859 [2024-04-18 17:08:04.311561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.859 [2024-04-18 17:08:04.311602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.859 qpair failed and we were unable to recover it.
00:20:48.859 [2024-04-18 17:08:04.321442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.859 [2024-04-18 17:08:04.321553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.859 [2024-04-18 17:08:04.321578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.859 [2024-04-18 17:08:04.321593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.859 [2024-04-18 17:08:04.321605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.859 [2024-04-18 17:08:04.321634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.859 qpair failed and we were unable to recover it.
00:20:48.859 [2024-04-18 17:08:04.331434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.859 [2024-04-18 17:08:04.331550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.859 [2024-04-18 17:08:04.331575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.859 [2024-04-18 17:08:04.331590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.859 [2024-04-18 17:08:04.331602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.859 [2024-04-18 17:08:04.331631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.859 qpair failed and we were unable to recover it.
00:20:48.859 [2024-04-18 17:08:04.341466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.859 [2024-04-18 17:08:04.341582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.859 [2024-04-18 17:08:04.341607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.859 [2024-04-18 17:08:04.341622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.859 [2024-04-18 17:08:04.341640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.859 [2024-04-18 17:08:04.341670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.859 qpair failed and we were unable to recover it.
00:20:48.859 [2024-04-18 17:08:04.351470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.859 [2024-04-18 17:08:04.351619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.859 [2024-04-18 17:08:04.351644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.859 [2024-04-18 17:08:04.351659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.859 [2024-04-18 17:08:04.351671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.859 [2024-04-18 17:08:04.351700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.859 qpair failed and we were unable to recover it.
00:20:48.859 [2024-04-18 17:08:04.361499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:48.859 [2024-04-18 17:08:04.361608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:48.859 [2024-04-18 17:08:04.361633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:48.859 [2024-04-18 17:08:04.361648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:48.859 [2024-04-18 17:08:04.361660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90
00:20:48.859 [2024-04-18 17:08:04.361689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:48.859 qpair failed and we were unable to recover it.
00:20:48.859 [2024-04-18 17:08:04.371503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.859 [2024-04-18 17:08:04.371613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.859 [2024-04-18 17:08:04.371638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.859 [2024-04-18 17:08:04.371653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.859 [2024-04-18 17:08:04.371665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.859 [2024-04-18 17:08:04.371693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.859 qpair failed and we were unable to recover it. 
00:20:48.859 [2024-04-18 17:08:04.381637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.859 [2024-04-18 17:08:04.381772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.859 [2024-04-18 17:08:04.381797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.859 [2024-04-18 17:08:04.381811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.859 [2024-04-18 17:08:04.381824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.859 [2024-04-18 17:08:04.381852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.859 qpair failed and we were unable to recover it. 
00:20:48.859 [2024-04-18 17:08:04.391613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.859 [2024-04-18 17:08:04.391773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.859 [2024-04-18 17:08:04.391798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.859 [2024-04-18 17:08:04.391813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.859 [2024-04-18 17:08:04.391825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.859 [2024-04-18 17:08:04.391853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.859 qpair failed and we were unable to recover it. 
00:20:48.859 [2024-04-18 17:08:04.401592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.859 [2024-04-18 17:08:04.401748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.859 [2024-04-18 17:08:04.401774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.859 [2024-04-18 17:08:04.401789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.859 [2024-04-18 17:08:04.401801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.859 [2024-04-18 17:08:04.401829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.859 qpair failed and we were unable to recover it. 
00:20:48.859 [2024-04-18 17:08:04.411594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.859 [2024-04-18 17:08:04.411702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.411726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.411741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.411753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.411782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.421672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.421781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.421806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.421821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.421833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.421863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.431707] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.431827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.431852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.431867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.431884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.431914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.441713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.441820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.441846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.441861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.441873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.441901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.451750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.451874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.451899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.451914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.451927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.451956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.461739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.461858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.461884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.461898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.461911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.461940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.471918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.472031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.472056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.472071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.472083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.472112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.481885] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.482014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.482041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.482060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.482073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.482103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.491871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.491990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.492016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.492031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.492044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.492084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.501897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.502013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.502038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.502053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.502065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.502094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.511998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.512104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.512130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.512144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.512157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.512186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.522026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.522133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.522160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.522181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.522194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.522225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.860 [2024-04-18 17:08:04.531978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.860 [2024-04-18 17:08:04.532097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.860 [2024-04-18 17:08:04.532124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.860 [2024-04-18 17:08:04.532139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.860 [2024-04-18 17:08:04.532151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.860 [2024-04-18 17:08:04.532180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.860 qpair failed and we were unable to recover it. 
00:20:48.861 [2024-04-18 17:08:04.542030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.861 [2024-04-18 17:08:04.542141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.861 [2024-04-18 17:08:04.542168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.861 [2024-04-18 17:08:04.542182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.861 [2024-04-18 17:08:04.542195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.861 [2024-04-18 17:08:04.542224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.861 qpair failed and we were unable to recover it. 
00:20:48.861 [2024-04-18 17:08:04.552045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.861 [2024-04-18 17:08:04.552164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.861 [2024-04-18 17:08:04.552191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.861 [2024-04-18 17:08:04.552206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.861 [2024-04-18 17:08:04.552222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.861 [2024-04-18 17:08:04.552264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.861 qpair failed and we were unable to recover it. 
00:20:48.861 [2024-04-18 17:08:04.562082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.861 [2024-04-18 17:08:04.562204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.861 [2024-04-18 17:08:04.562230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.861 [2024-04-18 17:08:04.562246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.861 [2024-04-18 17:08:04.562259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:48.861 [2024-04-18 17:08:04.562288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.861 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.572090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.572203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.572228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.572243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.572255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:49.120 [2024-04-18 17:08:04.572283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.582096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.582208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.582233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.582248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.582261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:49.120 [2024-04-18 17:08:04.582290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.592144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.592271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.592298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.592314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.592326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:49.120 [2024-04-18 17:08:04.592356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.602172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.602328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.602355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.602370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.602395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:49.120 [2024-04-18 17:08:04.602426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.612195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.612320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.612351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.612367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.612379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:49.120 [2024-04-18 17:08:04.612419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.622231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.622339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.622365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.622386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.622400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b20000b90 00:20:49.120 [2024-04-18 17:08:04.622430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.632269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.632393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.632425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.632442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.632455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x125af30 00:20:49.120 [2024-04-18 17:08:04.632485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.642317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.642453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.642481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.642496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.642509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x125af30 00:20:49.120 [2024-04-18 17:08:04.642537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.652352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.120 [2024-04-18 17:08:04.652465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.120 [2024-04-18 17:08:04.652498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.120 [2024-04-18 17:08:04.652514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.120 [2024-04-18 17:08:04.652527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90 00:20:49.120 [2024-04-18 17:08:04.652564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.120 qpair failed and we were unable to recover it. 
00:20:49.120 [2024-04-18 17:08:04.662412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:49.120 [2024-04-18 17:08:04.662519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:49.120 [2024-04-18 17:08:04.662545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:49.120 [2024-04-18 17:08:04.662560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:49.121 [2024-04-18 17:08:04.662573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b28000b90
00:20:49.121 [2024-04-18 17:08:04.662602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:49.121 qpair failed and we were unable to recover it.
00:20:49.121 [2024-04-18 17:08:04.662722] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:20:49.121 A controller has encountered a failure and is being reset.
00:20:49.121 [2024-04-18 17:08:04.672407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.121 [2024-04-18 17:08:04.672529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.121 [2024-04-18 17:08:04.672560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.121 [2024-04-18 17:08:04.672577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.121 [2024-04-18 17:08:04.672590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b18000b90 00:20:49.121 [2024-04-18 17:08:04.672621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:49.121 qpair failed and we were unable to recover it. 
00:20:49.121 [2024-04-18 17:08:04.682457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.121 [2024-04-18 17:08:04.682589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.121 [2024-04-18 17:08:04.682617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.121 [2024-04-18 17:08:04.682632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.121 [2024-04-18 17:08:04.682645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b18000b90 00:20:49.121 [2024-04-18 17:08:04.682675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:49.121 qpair failed and we were unable to recover it. 00:20:49.121 Controller properly reset. 00:20:49.121 Initializing NVMe Controllers 00:20:49.121 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:20:49.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:20:49.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:20:49.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:20:49.121 Initialization complete. Launching workers. 
00:20:49.121 Starting thread on core 1 00:20:49.121 Starting thread on core 2 00:20:49.121 Starting thread on core 3 00:20:49.121 Starting thread on core 0 00:20:49.121 17:08:04 -- host/target_disconnect.sh@59 -- # sync 00:20:49.121 00:20:49.121 real 0m11.374s 00:20:49.121 user 0m21.151s 00:20:49.121 sys 0m5.272s 00:20:49.121 17:08:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:49.121 17:08:04 -- common/autotest_common.sh@10 -- # set +x 00:20:49.121 ************************************ 00:20:49.121 END TEST nvmf_target_disconnect_tc2 00:20:49.121 ************************************ 00:20:49.121 17:08:04 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:20:49.121 17:08:04 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:49.121 17:08:04 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:20:49.121 17:08:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:49.121 17:08:04 -- nvmf/common.sh@117 -- # sync 00:20:49.121 17:08:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:49.121 17:08:04 -- nvmf/common.sh@120 -- # set +e 00:20:49.121 17:08:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:49.121 17:08:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:49.121 rmmod nvme_tcp 00:20:49.121 rmmod nvme_fabrics 00:20:49.121 rmmod nvme_keyring 00:20:49.121 17:08:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:49.121 17:08:04 -- nvmf/common.sh@124 -- # set -e 00:20:49.121 17:08:04 -- nvmf/common.sh@125 -- # return 0 00:20:49.121 17:08:04 -- nvmf/common.sh@478 -- # '[' -n 1765927 ']' 00:20:49.121 17:08:04 -- nvmf/common.sh@479 -- # killprocess 1765927 00:20:49.121 17:08:04 -- common/autotest_common.sh@936 -- # '[' -z 1765927 ']' 00:20:49.121 17:08:04 -- common/autotest_common.sh@940 -- # kill -0 1765927 00:20:49.121 17:08:04 -- common/autotest_common.sh@941 -- # uname 00:20:49.121 17:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:49.121 17:08:04 -- common/autotest_common.sh@942 -- # 
ps --no-headers -o comm= 1765927 00:20:49.379 17:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:20:49.379 17:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:20:49.379 17:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1765927' 00:20:49.379 killing process with pid 1765927 00:20:49.379 17:08:04 -- common/autotest_common.sh@955 -- # kill 1765927 00:20:49.379 17:08:04 -- common/autotest_common.sh@960 -- # wait 1765927 00:20:49.638 17:08:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:49.638 17:08:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:49.638 17:08:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:49.638 17:08:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.638 17:08:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.638 17:08:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.638 17:08:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.638 17:08:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.552 17:08:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.552 00:20:51.552 real 0m16.336s 00:20:51.552 user 0m47.108s 00:20:51.552 sys 0m7.317s 00:20:51.552 17:08:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:51.552 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:51.552 ************************************ 00:20:51.552 END TEST nvmf_target_disconnect 00:20:51.552 ************************************ 00:20:51.552 17:08:07 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:20:51.552 17:08:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:51.552 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:51.552 17:08:07 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:20:51.552 00:20:51.552 real 15m21.895s 00:20:51.552 user 35m37.910s 00:20:51.552 sys 4m10.199s 00:20:51.552 17:08:07 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:20:51.552 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:51.552 ************************************ 00:20:51.552 END TEST nvmf_tcp 00:20:51.552 ************************************ 00:20:51.552 17:08:07 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:20:51.552 17:08:07 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:20:51.552 17:08:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:51.552 17:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:51.552 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:51.810 ************************************ 00:20:51.810 START TEST spdkcli_nvmf_tcp 00:20:51.810 ************************************ 00:20:51.810 17:08:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:20:51.810 * Looking for test storage... 
00:20:51.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:20:51.810 17:08:07 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:20:51.811 17:08:07 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:20:51.811 17:08:07 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:20:51.811 17:08:07 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.811 17:08:07 -- nvmf/common.sh@7 -- # uname -s 00:20:51.811 17:08:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.811 17:08:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.811 17:08:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.811 17:08:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.811 17:08:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.811 17:08:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.811 17:08:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.811 17:08:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.811 17:08:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.811 17:08:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.811 17:08:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.811 17:08:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.811 17:08:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.811 17:08:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.811 17:08:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.811 17:08:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.811 17:08:07 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.811 17:08:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.811 17:08:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.811 17:08:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.811 17:08:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.811 17:08:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.811 17:08:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.811 17:08:07 -- paths/export.sh@5 -- # export PATH 00:20:51.811 17:08:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.811 17:08:07 -- nvmf/common.sh@47 -- # : 0 00:20:51.811 17:08:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.811 17:08:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.811 17:08:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.811 17:08:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.811 17:08:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.811 17:08:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.811 17:08:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.811 17:08:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.811 17:08:07 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:20:51.811 17:08:07 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:20:51.811 17:08:07 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:20:51.811 17:08:07 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:20:51.811 17:08:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:51.811 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:51.811 17:08:07 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:20:51.811 17:08:07 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1767130 00:20:51.811 17:08:07 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:20:51.811 17:08:07 -- spdkcli/common.sh@34 -- # waitforlisten 1767130 00:20:51.811 17:08:07 -- common/autotest_common.sh@817 -- # '[' -z 1767130 ']' 00:20:51.811 17:08:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.811 17:08:07 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:20:51.811 17:08:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.811 17:08:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:51.811 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:51.811 [2024-04-18 17:08:07.444336] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:51.811 [2024-04-18 17:08:07.444444] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767130 ] 00:20:51.811 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.811 [2024-04-18 17:08:07.500523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:52.069 [2024-04-18 17:08:07.610341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.069 [2024-04-18 17:08:07.610344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.069 17:08:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:52.069 17:08:07 -- common/autotest_common.sh@850 -- # return 0 00:20:52.069 17:08:07 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:20:52.069 17:08:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:52.069 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:52.069 17:08:07 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:20:52.069 17:08:07 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:20:52.069 17:08:07 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:20:52.069 17:08:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:52.069 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:52.069 17:08:07 -- spdkcli/nvmf.sh@65 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:52.069 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:52.069 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:20:52.069 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:20:52.069 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:20:52.069 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:20:52.069 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:20:52.069 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:52.069 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:52.069 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:20:52.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:20:52.069 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:20:52.069 ' 00:20:52.639 [2024-04-18 17:08:08.146764] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:55.175 [2024-04-18 17:08:10.298350] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.115 [2024-04-18 17:08:11.522656] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:20:58.657 [2024-04-18 17:08:13.785831] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 
00:21:00.565 [2024-04-18 17:08:15.752036] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:01.944 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:01.944 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:01.944 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:01.944 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:01.944 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:01.944 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:01.944 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:01.944 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:01.944 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:01.944 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:01.944 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:01.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:01.944 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:01.944 17:08:17 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:01.944 17:08:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.944 17:08:17 -- common/autotest_common.sh@10 -- # 
set +x 00:21:01.944 17:08:17 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:01.944 17:08:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:01.944 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:01.944 17:08:17 -- spdkcli/nvmf.sh@69 -- # check_match 00:21:01.944 17:08:17 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:02.203 17:08:17 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:02.203 17:08:17 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:02.203 17:08:17 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:02.203 17:08:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:02.203 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.203 17:08:17 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:02.203 17:08:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:02.203 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.203 17:08:17 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:02.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:02.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:02.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:02.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:02.203 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:02.203 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:02.203 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:02.203 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:02.203 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:02.203 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:02.203 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:02.203 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:02.203 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:02.203 ' 00:21:07.482 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:07.482 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:07.482 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:07.482 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:07.482 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:07.482 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:07.482 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:07.482 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:07.482 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:07.482 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:07.482 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 
00:21:07.482 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:07.482 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:07.482 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:07.482 17:08:23 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:07.482 17:08:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:07.482 17:08:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.482 17:08:23 -- spdkcli/nvmf.sh@90 -- # killprocess 1767130 00:21:07.482 17:08:23 -- common/autotest_common.sh@936 -- # '[' -z 1767130 ']' 00:21:07.482 17:08:23 -- common/autotest_common.sh@940 -- # kill -0 1767130 00:21:07.482 17:08:23 -- common/autotest_common.sh@941 -- # uname 00:21:07.482 17:08:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:07.482 17:08:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1767130 00:21:07.482 17:08:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:07.482 17:08:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:07.482 17:08:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1767130' 00:21:07.482 killing process with pid 1767130 00:21:07.482 17:08:23 -- common/autotest_common.sh@955 -- # kill 1767130 00:21:07.482 [2024-04-18 17:08:23.107149] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:07.482 17:08:23 -- common/autotest_common.sh@960 -- # wait 1767130 00:21:07.740 17:08:23 -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:07.740 17:08:23 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:07.740 17:08:23 -- spdkcli/common.sh@13 -- # '[' -n 1767130 ']' 00:21:07.740 17:08:23 -- spdkcli/common.sh@14 -- # killprocess 1767130 00:21:07.740 17:08:23 -- common/autotest_common.sh@936 -- # '[' -z 1767130 ']' 00:21:07.740 17:08:23 -- 
common/autotest_common.sh@940 -- # kill -0 1767130 00:21:07.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1767130) - No such process 00:21:07.740 17:08:23 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1767130 is not found' 00:21:07.740 Process with pid 1767130 is not found 00:21:07.740 17:08:23 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:07.740 17:08:23 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:07.740 17:08:23 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:07.740 00:21:07.740 real 0m16.035s 00:21:07.740 user 0m33.782s 00:21:07.740 sys 0m0.828s 00:21:07.740 17:08:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:07.740 17:08:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.740 ************************************ 00:21:07.740 END TEST spdkcli_nvmf_tcp 00:21:07.740 ************************************ 00:21:07.740 17:08:23 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:07.740 17:08:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:07.740 17:08:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:07.740 17:08:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.998 ************************************ 00:21:07.998 START TEST nvmf_identify_passthru 00:21:07.998 ************************************ 00:21:07.998 17:08:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:07.998 * Looking for test storage... 
00:21:07.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.998 17:08:23 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.998 17:08:23 -- nvmf/common.sh@7 -- # uname -s 00:21:07.998 17:08:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.998 17:08:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.998 17:08:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.998 17:08:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.998 17:08:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.998 17:08:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.998 17:08:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.998 17:08:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.998 17:08:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.998 17:08:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.998 17:08:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.998 17:08:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.998 17:08:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.998 17:08:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.998 17:08:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.998 17:08:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.998 17:08:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.998 17:08:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.998 17:08:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.998 17:08:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.998 17:08:23 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.998 17:08:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.998 17:08:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.998 17:08:23 -- paths/export.sh@5 -- # export PATH 00:21:07.998 17:08:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.999 17:08:23 -- nvmf/common.sh@47 -- # : 0 00:21:07.999 17:08:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.999 17:08:23 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:21:07.999 17:08:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.999 17:08:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.999 17:08:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.999 17:08:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.999 17:08:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.999 17:08:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.999 17:08:23 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.999 17:08:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.999 17:08:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.999 17:08:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.999 17:08:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.999 17:08:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.999 17:08:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.999 17:08:23 -- paths/export.sh@5 -- # export PATH 00:21:07.999 17:08:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.999 17:08:23 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:21:07.999 17:08:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:07.999 17:08:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.999 17:08:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:07.999 17:08:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:07.999 17:08:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:07.999 17:08:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.999 17:08:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:07.999 17:08:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.999 17:08:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:07.999 17:08:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:07.999 17:08:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.999 17:08:23 -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.902 17:08:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:09.902 17:08:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:09.902 17:08:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:09.902 17:08:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:09.902 17:08:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:09.902 17:08:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:09.902 17:08:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:09.902 17:08:25 -- nvmf/common.sh@295 -- # net_devs=() 00:21:09.902 17:08:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:09.902 17:08:25 -- nvmf/common.sh@296 -- # e810=() 00:21:09.902 17:08:25 -- nvmf/common.sh@296 -- # local -ga e810 00:21:09.902 17:08:25 -- nvmf/common.sh@297 -- # x722=() 00:21:09.902 17:08:25 -- nvmf/common.sh@297 -- # local -ga x722 00:21:09.902 17:08:25 -- nvmf/common.sh@298 -- # mlx=() 00:21:09.902 17:08:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:09.902 17:08:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.902 17:08:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:09.902 17:08:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:09.902 17:08:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:09.902 17:08:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:09.902 17:08:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:09.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:09.902 17:08:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:09.902 17:08:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:09.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:09.902 17:08:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:09.902 17:08:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:09.902 17:08:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.902 17:08:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:09.902 17:08:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:09.902 17:08:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.902 17:08:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:09.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:09.902 17:08:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.903 17:08:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.903 17:08:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.903 17:08:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:09.903 17:08:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.903 17:08:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:09.903 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:09.903 17:08:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.903 17:08:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:09.903 17:08:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:09.903 17:08:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:09.903 17:08:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:09.903 17:08:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:09.903 17:08:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.903 17:08:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.903 17:08:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.903 17:08:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:09.903 17:08:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.903 17:08:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.903 17:08:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:09.903 17:08:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.903 17:08:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:09.903 17:08:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:09.903 17:08:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:09.903 17:08:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.903 17:08:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.903 17:08:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.903 17:08:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.903 17:08:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:09.903 17:08:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.161 17:08:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.161 17:08:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.161 17:08:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:10.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:21:10.161 00:21:10.161 --- 10.0.0.2 ping statistics --- 00:21:10.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.161 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:21:10.161 17:08:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:21:10.161 00:21:10.161 --- 10.0.0.1 ping statistics --- 00:21:10.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.161 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:10.161 17:08:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.161 17:08:25 -- nvmf/common.sh@411 -- # return 0 00:21:10.161 17:08:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:10.161 17:08:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.161 17:08:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:10.161 17:08:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:10.161 17:08:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.161 17:08:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:10.161 17:08:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:10.161 17:08:25 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:10.161 17:08:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:10.161 17:08:25 -- common/autotest_common.sh@10 -- # set +x 00:21:10.161 17:08:25 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:10.161 17:08:25 -- common/autotest_common.sh@1510 -- # bdfs=() 00:21:10.161 17:08:25 -- common/autotest_common.sh@1510 -- # local bdfs 00:21:10.162 17:08:25 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:21:10.162 17:08:25 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:21:10.162 17:08:25 -- common/autotest_common.sh@1499 -- # bdfs=() 00:21:10.162 17:08:25 -- common/autotest_common.sh@1499 -- # local bdfs 00:21:10.162 17:08:25 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:10.162 17:08:25 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:10.162 17:08:25 -- 
common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:21:10.162 17:08:25 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:21:10.162 17:08:25 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:21:10.162 17:08:25 -- common/autotest_common.sh@1513 -- # echo 0000:88:00.0 00:21:10.162 17:08:25 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:21:10.162 17:08:25 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:21:10.162 17:08:25 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:21:10.162 17:08:25 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:10.162 17:08:25 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:10.162 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.359 17:08:29 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:21:14.359 17:08:29 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:21:14.359 17:08:29 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:14.359 17:08:29 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:14.359 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.567 17:08:34 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:21:18.567 17:08:34 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:18.567 17:08:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:18.567 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.567 17:08:34 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:18.567 17:08:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:18.567 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.567 17:08:34 -- target/identify_passthru.sh@31 -- # 
nvmfpid=1771766 00:21:18.567 17:08:34 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:18.567 17:08:34 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.567 17:08:34 -- target/identify_passthru.sh@35 -- # waitforlisten 1771766 00:21:18.567 17:08:34 -- common/autotest_common.sh@817 -- # '[' -z 1771766 ']' 00:21:18.567 17:08:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.567 17:08:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:18.567 17:08:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.567 17:08:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:18.567 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.567 [2024-04-18 17:08:34.165960] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:21:18.567 [2024-04-18 17:08:34.166063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.567 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.567 [2024-04-18 17:08:34.231621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.826 [2024-04-18 17:08:34.342337] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.826 [2024-04-18 17:08:34.342407] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:18.826 [2024-04-18 17:08:34.342438] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.826 [2024-04-18 17:08:34.342450] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.826 [2024-04-18 17:08:34.342460] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.826 [2024-04-18 17:08:34.342589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.826 [2024-04-18 17:08:34.342655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.826 [2024-04-18 17:08:34.342720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.826 [2024-04-18 17:08:34.342723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.826 17:08:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.826 17:08:34 -- common/autotest_common.sh@850 -- # return 0 00:21:18.826 17:08:34 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:21:18.826 17:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.826 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.826 INFO: Log level set to 20 00:21:18.826 INFO: Requests: 00:21:18.826 { 00:21:18.826 "jsonrpc": "2.0", 00:21:18.826 "method": "nvmf_set_config", 00:21:18.826 "id": 1, 00:21:18.826 "params": { 00:21:18.826 "admin_cmd_passthru": { 00:21:18.826 "identify_ctrlr": true 00:21:18.826 } 00:21:18.826 } 00:21:18.826 } 00:21:18.826 00:21:18.826 INFO: response: 00:21:18.826 { 00:21:18.826 "jsonrpc": "2.0", 00:21:18.826 "id": 1, 00:21:18.826 "result": true 00:21:18.826 } 00:21:18.826 00:21:18.826 17:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.826 17:08:34 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:21:18.826 17:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.826 17:08:34 -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.826 INFO: Setting log level to 20 00:21:18.826 INFO: Setting log level to 20 00:21:18.826 INFO: Log level set to 20 00:21:18.826 INFO: Log level set to 20 00:21:18.826 INFO: Requests: 00:21:18.826 { 00:21:18.826 "jsonrpc": "2.0", 00:21:18.826 "method": "framework_start_init", 00:21:18.826 "id": 1 00:21:18.826 } 00:21:18.826 00:21:18.826 INFO: Requests: 00:21:18.826 { 00:21:18.826 "jsonrpc": "2.0", 00:21:18.826 "method": "framework_start_init", 00:21:18.826 "id": 1 00:21:18.826 } 00:21:18.826 00:21:18.826 [2024-04-18 17:08:34.487788] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:21:18.826 INFO: response: 00:21:18.826 { 00:21:18.826 "jsonrpc": "2.0", 00:21:18.826 "id": 1, 00:21:18.826 "result": true 00:21:18.826 } 00:21:18.826 00:21:18.826 INFO: response: 00:21:18.826 { 00:21:18.826 "jsonrpc": "2.0", 00:21:18.826 "id": 1, 00:21:18.826 "result": true 00:21:18.826 } 00:21:18.826 00:21:18.826 17:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.826 17:08:34 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.826 17:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.826 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.826 INFO: Setting log level to 40 00:21:18.826 INFO: Setting log level to 40 00:21:18.826 INFO: Setting log level to 40 00:21:18.826 [2024-04-18 17:08:34.497909] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.826 17:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.826 17:08:34 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:21:18.826 17:08:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:18.826 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.826 17:08:34 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:21:18.826 17:08:34 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.087 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:22.376 Nvme0n1 00:21:22.376 17:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.376 17:08:37 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:21:22.376 17:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.376 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.376 17:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.376 17:08:37 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:22.376 17:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.376 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.376 17:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.376 17:08:37 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.376 17:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.376 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.376 [2024-04-18 17:08:37.395591] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.376 17:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.376 17:08:37 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:21:22.376 17:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.376 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.376 [2024-04-18 17:08:37.403317] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:22.376 [ 00:21:22.376 { 00:21:22.376 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:22.376 "subtype": "Discovery", 00:21:22.376 
"listen_addresses": [], 00:21:22.376 "allow_any_host": true, 00:21:22.376 "hosts": [] 00:21:22.376 }, 00:21:22.376 { 00:21:22.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.376 "subtype": "NVMe", 00:21:22.376 "listen_addresses": [ 00:21:22.376 { 00:21:22.376 "transport": "TCP", 00:21:22.376 "trtype": "TCP", 00:21:22.376 "adrfam": "IPv4", 00:21:22.376 "traddr": "10.0.0.2", 00:21:22.376 "trsvcid": "4420" 00:21:22.376 } 00:21:22.376 ], 00:21:22.376 "allow_any_host": true, 00:21:22.376 "hosts": [], 00:21:22.376 "serial_number": "SPDK00000000000001", 00:21:22.376 "model_number": "SPDK bdev Controller", 00:21:22.376 "max_namespaces": 1, 00:21:22.376 "min_cntlid": 1, 00:21:22.376 "max_cntlid": 65519, 00:21:22.376 "namespaces": [ 00:21:22.376 { 00:21:22.376 "nsid": 1, 00:21:22.376 "bdev_name": "Nvme0n1", 00:21:22.376 "name": "Nvme0n1", 00:21:22.376 "nguid": "4B6E5771313A43E2A305A6A2C700B5D7", 00:21:22.376 "uuid": "4b6e5771-313a-43e2-a305-a6a2c700b5d7" 00:21:22.376 } 00:21:22.376 ] 00:21:22.376 } 00:21:22.376 ] 00:21:22.376 17:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.376 17:08:37 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:22.376 17:08:37 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:21:22.376 17:08:37 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:21:22.376 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.376 17:08:37 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:21:22.376 17:08:37 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:22.376 17:08:37 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:21:22.376 17:08:37 -- 
target/identify_passthru.sh@61 -- # awk '{print $3}' 00:21:22.376 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.376 17:08:37 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:21:22.376 17:08:37 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:21:22.376 17:08:37 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:21:22.376 17:08:37 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.376 17:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.376 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.376 17:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.376 17:08:37 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:21:22.376 17:08:37 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:21:22.376 17:08:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:22.376 17:08:37 -- nvmf/common.sh@117 -- # sync 00:21:22.376 17:08:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.376 17:08:37 -- nvmf/common.sh@120 -- # set +e 00:21:22.376 17:08:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.376 17:08:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.376 rmmod nvme_tcp 00:21:22.376 rmmod nvme_fabrics 00:21:22.376 rmmod nvme_keyring 00:21:22.376 17:08:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.376 17:08:37 -- nvmf/common.sh@124 -- # set -e 00:21:22.376 17:08:37 -- nvmf/common.sh@125 -- # return 0 00:21:22.376 17:08:37 -- nvmf/common.sh@478 -- # '[' -n 1771766 ']' 00:21:22.376 17:08:37 -- nvmf/common.sh@479 -- # killprocess 1771766 00:21:22.376 17:08:37 -- common/autotest_common.sh@936 -- # '[' -z 1771766 ']' 00:21:22.376 17:08:37 -- common/autotest_common.sh@940 -- # kill -0 1771766 00:21:22.376 17:08:37 -- common/autotest_common.sh@941 -- # uname 00:21:22.376 17:08:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:21:22.376 17:08:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1771766 00:21:22.376 17:08:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:22.376 17:08:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:22.376 17:08:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771766' 00:21:22.376 killing process with pid 1771766 00:21:22.376 17:08:37 -- common/autotest_common.sh@955 -- # kill 1771766 00:21:22.376 [2024-04-18 17:08:37.812951] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:22.376 17:08:37 -- common/autotest_common.sh@960 -- # wait 1771766 00:21:23.819 17:08:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:23.819 17:08:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:23.819 17:08:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:23.819 17:08:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.819 17:08:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.819 17:08:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.819 17:08:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:23.819 17:08:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.736 17:08:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.736 00:21:25.736 real 0m17.947s 00:21:25.736 user 0m26.419s 00:21:25.736 sys 0m2.354s 00:21:25.736 17:08:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.736 17:08:41 -- common/autotest_common.sh@10 -- # set +x 00:21:25.736 ************************************ 00:21:25.736 END TEST nvmf_identify_passthru 00:21:25.736 ************************************ 00:21:25.996 17:08:41 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 
00:21:25.996 17:08:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:25.996 17:08:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.996 17:08:41 -- common/autotest_common.sh@10 -- # set +x 00:21:25.996 ************************************ 00:21:25.996 START TEST nvmf_dif 00:21:25.996 ************************************ 00:21:25.996 17:08:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:21:25.996 * Looking for test storage... 00:21:25.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:25.996 17:08:41 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.996 17:08:41 -- nvmf/common.sh@7 -- # uname -s 00:21:25.996 17:08:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.996 17:08:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.996 17:08:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.996 17:08:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.996 17:08:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.996 17:08:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.996 17:08:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.996 17:08:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.996 17:08:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.996 17:08:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.996 17:08:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.996 17:08:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.996 17:08:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.996 17:08:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.996 17:08:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:25.996 17:08:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.996 17:08:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.996 17:08:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.996 17:08:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.996 17:08:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.996 17:08:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.996 17:08:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.996 17:08:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.996 17:08:41 -- paths/export.sh@5 -- # export PATH 00:21:25.996 17:08:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.996 17:08:41 -- nvmf/common.sh@47 -- # : 0 00:21:25.996 17:08:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.996 17:08:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.996 17:08:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.996 17:08:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.996 17:08:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.996 17:08:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.996 17:08:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.996 17:08:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.996 17:08:41 -- target/dif.sh@15 -- # NULL_META=16 00:21:25.996 17:08:41 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:25.996 17:08:41 -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:25.996 17:08:41 -- target/dif.sh@15 -- # NULL_DIF=1 00:21:25.996 17:08:41 -- target/dif.sh@135 -- # nvmftestinit 00:21:25.996 17:08:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.996 17:08:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.996 17:08:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.996 17:08:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.996 17:08:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.996 17:08:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.996 17:08:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:25.996 17:08:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.996 17:08:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 
00:21:25.996 17:08:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:25.996 17:08:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.996 17:08:41 -- common/autotest_common.sh@10 -- # set +x 00:21:28.529 17:08:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:28.529 17:08:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:28.529 17:08:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.529 17:08:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.529 17:08:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.529 17:08:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.529 17:08:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.529 17:08:43 -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.529 17:08:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.529 17:08:43 -- nvmf/common.sh@296 -- # e810=() 00:21:28.529 17:08:43 -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.529 17:08:43 -- nvmf/common.sh@297 -- # x722=() 00:21:28.529 17:08:43 -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.529 17:08:43 -- nvmf/common.sh@298 -- # mlx=() 00:21:28.529 17:08:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.529 17:08:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.529 17:08:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.529 17:08:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:28.529 17:08:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.529 17:08:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.529 17:08:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:28.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:28.529 17:08:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.529 17:08:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:28.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:28.529 17:08:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.529 17:08:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.529 17:08:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.529 17:08:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:28.529 17:08:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.529 17:08:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:28.529 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:28.529 17:08:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.529 17:08:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.529 17:08:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.529 17:08:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:28.529 17:08:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.529 17:08:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:28.529 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:28.529 17:08:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.529 17:08:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:28.529 17:08:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:28.529 17:08:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:28.529 17:08:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:28.529 17:08:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.529 17:08:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.529 17:08:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.529 17:08:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:28.529 17:08:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.529 17:08:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.529 17:08:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:21:28.529 17:08:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.529 17:08:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.529 17:08:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:28.529 17:08:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:28.529 17:08:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.529 17:08:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.529 17:08:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.529 17:08:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.529 17:08:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:28.529 17:08:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.529 17:08:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.529 17:08:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.529 17:08:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:28.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:21:28.529 00:21:28.529 --- 10.0.0.2 ping statistics --- 00:21:28.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.529 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:21:28.529 17:08:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:21:28.529 00:21:28.529 --- 10.0.0.1 ping statistics --- 00:21:28.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.529 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:28.529 17:08:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.529 17:08:43 -- nvmf/common.sh@411 -- # return 0 00:21:28.529 17:08:43 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:21:28.529 17:08:43 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:29.096 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:29.096 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:29.355 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:29.355 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:29.355 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:29.355 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:29.355 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:29.355 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:29.355 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:29.355 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:29.355 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:29.355 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:29.355 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:29.355 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:29.355 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:29.355 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:29.355 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:29.355 17:08:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.355 17:08:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 
00:21:29.355 17:08:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:29.355 17:08:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.355 17:08:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:29.355 17:08:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:29.355 17:08:45 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:29.355 17:08:45 -- target/dif.sh@137 -- # nvmfappstart 00:21:29.355 17:08:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:29.355 17:08:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:29.355 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.355 17:08:45 -- nvmf/common.sh@470 -- # nvmfpid=1774916 00:21:29.355 17:08:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:29.355 17:08:45 -- nvmf/common.sh@471 -- # waitforlisten 1774916 00:21:29.355 17:08:45 -- common/autotest_common.sh@817 -- # '[' -z 1774916 ']' 00:21:29.355 17:08:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.355 17:08:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:29.355 17:08:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.355 17:08:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:29.355 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:21:29.355 [2024-04-18 17:08:45.055193] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:21:29.355 [2024-04-18 17:08:45.055267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.615 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.615 [2024-04-18 17:08:45.125677] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.615 [2024-04-18 17:08:45.240321] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.615 [2024-04-18 17:08:45.240402] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.615 [2024-04-18 17:08:45.240420] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.615 [2024-04-18 17:08:45.240434] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.615 [2024-04-18 17:08:45.240460] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:29.615 [2024-04-18 17:08:45.240492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.553 17:08:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:30.553 17:08:46 -- common/autotest_common.sh@850 -- # return 0 00:21:30.553 17:08:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:30.553 17:08:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:30.553 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:30.553 17:08:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.553 17:08:46 -- target/dif.sh@139 -- # create_transport 00:21:30.553 17:08:46 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:30.553 17:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.553 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:30.553 [2024-04-18 17:08:46.033166] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.553 17:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.553 17:08:46 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:30.553 17:08:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:30.553 17:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:30.553 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:30.553 ************************************ 00:21:30.553 START TEST fio_dif_1_default 00:21:30.553 ************************************ 00:21:30.553 17:08:46 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:21:30.553 17:08:46 -- target/dif.sh@86 -- # create_subsystems 0 00:21:30.553 17:08:46 -- target/dif.sh@28 -- # local sub 00:21:30.553 17:08:46 -- target/dif.sh@30 -- # for sub in "$@" 00:21:30.553 17:08:46 -- target/dif.sh@31 -- # create_subsystem 0 00:21:30.553 17:08:46 -- target/dif.sh@18 -- # local sub_id=0 00:21:30.553 17:08:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:30.553 17:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.553 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:30.553 bdev_null0 00:21:30.553 17:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.553 17:08:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:30.553 17:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.553 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:30.553 17:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.553 17:08:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:30.553 17:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.553 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:30.553 17:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.553 17:08:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:30.553 17:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.553 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:30.553 [2024-04-18 17:08:46.173706] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.553 17:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.554 17:08:46 -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:30.554 17:08:46 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:30.554 17:08:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:30.554 17:08:46 -- nvmf/common.sh@521 -- # config=() 00:21:30.554 17:08:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:30.554 17:08:46 -- nvmf/common.sh@521 -- # local subsystem config 00:21:30.554 17:08:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:30.554 
17:08:46 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:30.554 17:08:46 -- target/dif.sh@82 -- # gen_fio_conf 00:21:30.554 17:08:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:30.554 { 00:21:30.554 "params": { 00:21:30.554 "name": "Nvme$subsystem", 00:21:30.554 "trtype": "$TEST_TRANSPORT", 00:21:30.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.554 "adrfam": "ipv4", 00:21:30.554 "trsvcid": "$NVMF_PORT", 00:21:30.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.554 "hdgst": ${hdgst:-false}, 00:21:30.554 "ddgst": ${ddgst:-false} 00:21:30.554 }, 00:21:30.554 "method": "bdev_nvme_attach_controller" 00:21:30.554 } 00:21:30.554 EOF 00:21:30.554 )") 00:21:30.554 17:08:46 -- target/dif.sh@54 -- # local file 00:21:30.554 17:08:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:30.554 17:08:46 -- target/dif.sh@56 -- # cat 00:21:30.554 17:08:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:30.554 17:08:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:30.554 17:08:46 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:30.554 17:08:46 -- common/autotest_common.sh@1327 -- # shift 00:21:30.554 17:08:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:30.554 17:08:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.554 17:08:46 -- nvmf/common.sh@543 -- # cat 00:21:30.554 17:08:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:30.554 17:08:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:30.554 17:08:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:30.554 17:08:46 -- 
target/dif.sh@72 -- # (( file <= files )) 00:21:30.554 17:08:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:30.554 17:08:46 -- nvmf/common.sh@545 -- # jq . 00:21:30.554 17:08:46 -- nvmf/common.sh@546 -- # IFS=, 00:21:30.554 17:08:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:30.554 "params": { 00:21:30.554 "name": "Nvme0", 00:21:30.554 "trtype": "tcp", 00:21:30.554 "traddr": "10.0.0.2", 00:21:30.554 "adrfam": "ipv4", 00:21:30.554 "trsvcid": "4420", 00:21:30.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:30.554 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:30.554 "hdgst": false, 00:21:30.554 "ddgst": false 00:21:30.554 }, 00:21:30.554 "method": "bdev_nvme_attach_controller" 00:21:30.554 }' 00:21:30.554 17:08:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:30.554 17:08:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:30.554 17:08:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.554 17:08:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:30.554 17:08:46 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:30.554 17:08:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:30.554 17:08:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:30.554 17:08:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:30.554 17:08:46 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:30.554 17:08:46 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:30.813 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:30.813 fio-3.35 00:21:30.813 Starting 1 thread 00:21:30.813 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.033 00:21:43.033 filename0: (groupid=0, jobs=1): err= 0: 
pid=1775282: Thu Apr 18 17:08:56 2024 00:21:43.033 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:21:43.033 slat (nsec): min=5092, max=40309, avg=9619.61, stdev=3033.44 00:21:43.033 clat (usec): min=661, max=45271, avg=20978.60, stdev=20273.18 00:21:43.033 lat (usec): min=668, max=45287, avg=20988.22, stdev=20273.41 00:21:43.033 clat percentiles (usec): 00:21:43.033 | 1.00th=[ 668], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 693], 00:21:43.033 | 30.00th=[ 709], 40.00th=[ 750], 50.00th=[ 848], 60.00th=[41157], 00:21:43.033 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:43.033 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:21:43.033 | 99.99th=[45351] 00:21:43.033 bw ( KiB/s): min= 704, max= 769, per=99.93%, avg=761.32, stdev=17.15, samples=19 00:21:43.033 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 00:21:43.033 lat (usec) : 750=41.12%, 1000=8.88% 00:21:43.033 lat (msec) : 50=50.00% 00:21:43.033 cpu : usr=89.66%, sys=9.99%, ctx=14, majf=0, minf=223 00:21:43.033 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.033 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.033 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:43.033 00:21:43.033 Run status group 0 (all jobs): 00:21:43.033 READ: bw=762KiB/s (780kB/s), 762KiB/s-762KiB/s (780kB/s-780kB/s), io=7616KiB (7799kB), run=10001-10001msec 00:21:43.033 17:08:57 -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:43.033 17:08:57 -- target/dif.sh@43 -- # local sub 00:21:43.033 17:08:57 -- target/dif.sh@45 -- # for sub in "$@" 00:21:43.033 17:08:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:43.033 17:08:57 -- target/dif.sh@36 -- # local sub_id=0 00:21:43.033 17:08:57 -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:43.033 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.033 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.033 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.033 17:08:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:43.033 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.033 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.033 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.033 00:21:43.033 real 0m11.017s 00:21:43.033 user 0m9.880s 00:21:43.033 sys 0m1.274s 00:21:43.033 17:08:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.033 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.033 ************************************ 00:21:43.033 END TEST fio_dif_1_default 00:21:43.033 ************************************ 00:21:43.033 17:08:57 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:43.033 17:08:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:43.033 17:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.033 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.033 ************************************ 00:21:43.033 START TEST fio_dif_1_multi_subsystems 00:21:43.033 ************************************ 00:21:43.033 17:08:57 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:21:43.033 17:08:57 -- target/dif.sh@92 -- # local files=1 00:21:43.033 17:08:57 -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:43.033 17:08:57 -- target/dif.sh@28 -- # local sub 00:21:43.034 17:08:57 -- target/dif.sh@30 -- # for sub in "$@" 00:21:43.034 17:08:57 -- target/dif.sh@31 -- # create_subsystem 0 00:21:43.034 17:08:57 -- target/dif.sh@18 -- # local sub_id=0 00:21:43.034 17:08:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
00:21:43.034 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.034 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.034 bdev_null0 00:21:43.034 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.034 17:08:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:43.034 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.034 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.034 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.034 17:08:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:43.034 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.034 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.034 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.034 17:08:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:43.034 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.034 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.034 [2024-04-18 17:08:57.295375] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.034 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.034 17:08:57 -- target/dif.sh@30 -- # for sub in "$@" 00:21:43.034 17:08:57 -- target/dif.sh@31 -- # create_subsystem 1 00:21:43.034 17:08:57 -- target/dif.sh@18 -- # local sub_id=1 00:21:43.034 17:08:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:43.034 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.034 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.034 bdev_null1 00:21:43.034 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.034 17:08:57 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:43.034 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.034 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.034 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.034 17:08:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:43.034 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.034 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.034 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.034 17:08:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.034 17:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.034 17:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.034 17:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.034 17:08:57 -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:43.034 17:08:57 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:43.034 17:08:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:43.034 17:08:57 -- nvmf/common.sh@521 -- # config=() 00:21:43.034 17:08:57 -- nvmf/common.sh@521 -- # local subsystem config 00:21:43.034 17:08:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:43.034 17:08:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:43.034 { 00:21:43.034 "params": { 00:21:43.034 "name": "Nvme$subsystem", 00:21:43.034 "trtype": "$TEST_TRANSPORT", 00:21:43.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.034 "adrfam": "ipv4", 00:21:43.034 "trsvcid": "$NVMF_PORT", 00:21:43.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.034 "hdgst": ${hdgst:-false}, 00:21:43.034 "ddgst": ${ddgst:-false} 00:21:43.034 }, 00:21:43.034 "method": 
"bdev_nvme_attach_controller" 00:21:43.034 } 00:21:43.034 EOF 00:21:43.034 )") 00:21:43.034 17:08:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:43.034 17:08:57 -- target/dif.sh@82 -- # gen_fio_conf 00:21:43.034 17:08:57 -- target/dif.sh@54 -- # local file 00:21:43.034 17:08:57 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:43.034 17:08:57 -- target/dif.sh@56 -- # cat 00:21:43.034 17:08:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:43.034 17:08:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:43.034 17:08:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:43.034 17:08:57 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:43.034 17:08:57 -- common/autotest_common.sh@1327 -- # shift 00:21:43.034 17:08:57 -- nvmf/common.sh@543 -- # cat 00:21:43.034 17:08:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:43.034 17:08:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:43.034 17:08:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:43.034 17:08:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:43.034 17:08:57 -- target/dif.sh@72 -- # (( file <= files )) 00:21:43.034 17:08:57 -- target/dif.sh@73 -- # cat 00:21:43.034 17:08:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:43.034 17:08:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:43.034 17:08:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:43.034 17:08:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:43.034 { 00:21:43.034 "params": { 00:21:43.034 "name": "Nvme$subsystem", 00:21:43.034 "trtype": 
"$TEST_TRANSPORT", 00:21:43.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.034 "adrfam": "ipv4", 00:21:43.034 "trsvcid": "$NVMF_PORT", 00:21:43.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.034 "hdgst": ${hdgst:-false}, 00:21:43.034 "ddgst": ${ddgst:-false} 00:21:43.034 }, 00:21:43.034 "method": "bdev_nvme_attach_controller" 00:21:43.034 } 00:21:43.034 EOF 00:21:43.034 )") 00:21:43.034 17:08:57 -- nvmf/common.sh@543 -- # cat 00:21:43.034 17:08:57 -- target/dif.sh@72 -- # (( file++ )) 00:21:43.034 17:08:57 -- target/dif.sh@72 -- # (( file <= files )) 00:21:43.034 17:08:57 -- nvmf/common.sh@545 -- # jq . 00:21:43.034 17:08:57 -- nvmf/common.sh@546 -- # IFS=, 00:21:43.034 17:08:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:43.034 "params": { 00:21:43.034 "name": "Nvme0", 00:21:43.034 "trtype": "tcp", 00:21:43.034 "traddr": "10.0.0.2", 00:21:43.034 "adrfam": "ipv4", 00:21:43.034 "trsvcid": "4420", 00:21:43.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:43.034 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:43.034 "hdgst": false, 00:21:43.034 "ddgst": false 00:21:43.034 }, 00:21:43.034 "method": "bdev_nvme_attach_controller" 00:21:43.034 },{ 00:21:43.034 "params": { 00:21:43.034 "name": "Nvme1", 00:21:43.034 "trtype": "tcp", 00:21:43.034 "traddr": "10.0.0.2", 00:21:43.034 "adrfam": "ipv4", 00:21:43.034 "trsvcid": "4420", 00:21:43.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.034 "hdgst": false, 00:21:43.034 "ddgst": false 00:21:43.034 }, 00:21:43.034 "method": "bdev_nvme_attach_controller" 00:21:43.034 }' 00:21:43.034 17:08:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:43.034 17:08:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:43.034 17:08:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:43.034 17:08:57 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:43.034 17:08:57 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:43.034 17:08:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:43.034 17:08:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:43.034 17:08:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:43.034 17:08:57 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:43.034 17:08:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:43.034 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:43.034 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:43.034 fio-3.35 00:21:43.034 Starting 2 threads 00:21:43.034 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.006 00:21:53.006 filename0: (groupid=0, jobs=1): err= 0: pid=1776700: Thu Apr 18 17:09:08 2024 00:21:53.006 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:21:53.006 slat (nsec): min=6905, max=87798, avg=9509.33, stdev=4766.55 00:21:53.006 clat (usec): min=661, max=44002, avg=21027.65, stdev=20221.05 00:21:53.006 lat (usec): min=669, max=44013, avg=21037.16, stdev=20221.50 00:21:53.006 clat percentiles (usec): 00:21:53.006 | 1.00th=[ 685], 5.00th=[ 701], 10.00th=[ 717], 20.00th=[ 742], 00:21:53.006 | 30.00th=[ 758], 40.00th=[ 816], 50.00th=[40633], 60.00th=[41157], 00:21:53.006 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:53.006 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:21:53.006 | 99.99th=[43779] 00:21:53.006 bw ( KiB/s): min= 704, max= 768, per=66.34%, avg=761.26, stdev=20.18, samples=19 00:21:53.006 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 
00:21:53.006 lat (usec) : 750=25.84%, 1000=23.21% 00:21:53.006 lat (msec) : 2=0.84%, 50=50.11% 00:21:53.006 cpu : usr=94.91%, sys=4.77%, ctx=14, majf=0, minf=143 00:21:53.006 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.006 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.006 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:53.006 filename1: (groupid=0, jobs=1): err= 0: pid=1776701: Thu Apr 18 17:09:08 2024 00:21:53.006 read: IOPS=97, BW=388KiB/s (398kB/s)(3888KiB/10015msec) 00:21:53.006 slat (nsec): min=6859, max=27819, avg=9462.61, stdev=3509.25 00:21:53.006 clat (usec): min=40854, max=44058, avg=41181.29, stdev=435.65 00:21:53.006 lat (usec): min=40861, max=44072, avg=41190.76, stdev=435.63 00:21:53.006 clat percentiles (usec): 00:21:53.006 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:21:53.006 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:53.006 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:21:53.006 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:21:53.006 | 99.99th=[44303] 00:21:53.006 bw ( KiB/s): min= 384, max= 416, per=33.74%, avg=387.20, stdev= 9.85, samples=20 00:21:53.006 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:21:53.006 lat (msec) : 50=100.00% 00:21:53.006 cpu : usr=94.47%, sys=5.21%, ctx=14, majf=0, minf=135 00:21:53.006 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.006 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.007 latency : target=0, window=0, 
percentile=100.00%, depth=4 00:21:53.007 00:21:53.007 Run status group 0 (all jobs): 00:21:53.007 READ: bw=1147KiB/s (1175kB/s), 388KiB/s-760KiB/s (398kB/s-778kB/s), io=11.2MiB (11.8MB), run=10003-10015msec 00:21:53.007 17:09:08 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:53.007 17:09:08 -- target/dif.sh@43 -- # local sub 00:21:53.007 17:09:08 -- target/dif.sh@45 -- # for sub in "$@" 00:21:53.007 17:09:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:53.007 17:09:08 -- target/dif.sh@36 -- # local sub_id=0 00:21:53.007 17:09:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:53.007 17:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.007 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.007 17:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.007 17:09:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:53.007 17:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.007 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.007 17:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.007 17:09:08 -- target/dif.sh@45 -- # for sub in "$@" 00:21:53.007 17:09:08 -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:53.007 17:09:08 -- target/dif.sh@36 -- # local sub_id=1 00:21:53.007 17:09:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.007 17:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.007 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.007 17:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.007 17:09:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:53.007 17:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.007 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.007 17:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.007 00:21:53.007 real 0m11.359s 
00:21:53.007 user 0m20.305s 00:21:53.007 sys 0m1.309s 00:21:53.007 17:09:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:53.007 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.007 ************************************ 00:21:53.007 END TEST fio_dif_1_multi_subsystems 00:21:53.007 ************************************ 00:21:53.007 17:09:08 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:53.007 17:09:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:53.007 17:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:53.007 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.266 ************************************ 00:21:53.266 START TEST fio_dif_rand_params 00:21:53.266 ************************************ 00:21:53.266 17:09:08 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:21:53.266 17:09:08 -- target/dif.sh@100 -- # local NULL_DIF 00:21:53.266 17:09:08 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:53.266 17:09:08 -- target/dif.sh@103 -- # NULL_DIF=3 00:21:53.266 17:09:08 -- target/dif.sh@103 -- # bs=128k 00:21:53.266 17:09:08 -- target/dif.sh@103 -- # numjobs=3 00:21:53.266 17:09:08 -- target/dif.sh@103 -- # iodepth=3 00:21:53.266 17:09:08 -- target/dif.sh@103 -- # runtime=5 00:21:53.266 17:09:08 -- target/dif.sh@105 -- # create_subsystems 0 00:21:53.266 17:09:08 -- target/dif.sh@28 -- # local sub 00:21:53.266 17:09:08 -- target/dif.sh@30 -- # for sub in "$@" 00:21:53.266 17:09:08 -- target/dif.sh@31 -- # create_subsystem 0 00:21:53.266 17:09:08 -- target/dif.sh@18 -- # local sub_id=0 00:21:53.266 17:09:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:53.266 17:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.266 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.266 bdev_null0 00:21:53.266 17:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:21:53.266 17:09:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:53.266 17:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.266 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.266 17:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.267 17:09:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:53.267 17:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.267 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.267 17:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.267 17:09:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:53.267 17:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.267 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.267 [2024-04-18 17:09:08.776018] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.267 17:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.267 17:09:08 -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:53.267 17:09:08 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:53.267 17:09:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:53.267 17:09:08 -- nvmf/common.sh@521 -- # config=() 00:21:53.267 17:09:08 -- nvmf/common.sh@521 -- # local subsystem config 00:21:53.267 17:09:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:53.267 17:09:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:53.267 { 00:21:53.267 "params": { 00:21:53.267 "name": "Nvme$subsystem", 00:21:53.267 "trtype": "$TEST_TRANSPORT", 00:21:53.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.267 "adrfam": "ipv4", 00:21:53.267 "trsvcid": "$NVMF_PORT", 00:21:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.267 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:53.267 "hdgst": ${hdgst:-false}, 00:21:53.267 "ddgst": ${ddgst:-false} 00:21:53.267 }, 00:21:53.267 "method": "bdev_nvme_attach_controller" 00:21:53.267 } 00:21:53.267 EOF 00:21:53.267 )") 00:21:53.267 17:09:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:53.267 17:09:08 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:53.267 17:09:08 -- target/dif.sh@82 -- # gen_fio_conf 00:21:53.267 17:09:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:53.267 17:09:08 -- target/dif.sh@54 -- # local file 00:21:53.267 17:09:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.267 17:09:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:53.267 17:09:08 -- target/dif.sh@56 -- # cat 00:21:53.267 17:09:08 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:53.267 17:09:08 -- common/autotest_common.sh@1327 -- # shift 00:21:53.267 17:09:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:53.267 17:09:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.267 17:09:08 -- nvmf/common.sh@543 -- # cat 00:21:53.267 17:09:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:53.267 17:09:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:53.267 17:09:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:53.267 17:09:08 -- target/dif.sh@72 -- # (( file <= files )) 00:21:53.267 17:09:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:53.267 17:09:08 -- nvmf/common.sh@545 -- # jq . 
00:21:53.267 17:09:08 -- nvmf/common.sh@546 -- # IFS=, 00:21:53.267 17:09:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:53.267 "params": { 00:21:53.267 "name": "Nvme0", 00:21:53.267 "trtype": "tcp", 00:21:53.267 "traddr": "10.0.0.2", 00:21:53.267 "adrfam": "ipv4", 00:21:53.267 "trsvcid": "4420", 00:21:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:53.267 "hdgst": false, 00:21:53.267 "ddgst": false 00:21:53.267 }, 00:21:53.267 "method": "bdev_nvme_attach_controller" 00:21:53.267 }' 00:21:53.267 17:09:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:53.267 17:09:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:53.267 17:09:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.267 17:09:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:53.267 17:09:08 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:53.267 17:09:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:53.267 17:09:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:53.267 17:09:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:53.267 17:09:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:53.267 17:09:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:53.526 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:53.526 ... 
00:21:53.526 fio-3.35 00:21:53.526 Starting 3 threads 00:21:53.526 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.124 00:22:00.124 filename0: (groupid=0, jobs=1): err= 0: pid=1778724: Thu Apr 18 17:09:14 2024 00:22:00.124 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(134MiB/5002msec) 00:22:00.124 slat (nsec): min=7186, max=45316, avg=13079.73, stdev=3882.21 00:22:00.124 clat (usec): min=4875, max=87701, avg=14019.65, stdev=9192.13 00:22:00.124 lat (usec): min=4886, max=87714, avg=14032.73, stdev=9192.06 00:22:00.124 clat percentiles (usec): 00:22:00.124 | 1.00th=[ 5800], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9503], 00:22:00.124 | 30.00th=[10552], 40.00th=[11731], 50.00th=[12518], 60.00th=[13173], 00:22:00.124 | 70.00th=[13960], 80.00th=[14746], 90.00th=[15926], 95.00th=[19792], 00:22:00.124 | 99.00th=[52691], 99.50th=[54264], 99.90th=[56886], 99.95th=[87557], 00:22:00.124 | 99.99th=[87557] 00:22:00.124 bw ( KiB/s): min=17664, max=33280, per=34.20%, avg=27346.90, stdev=5109.99, samples=10 00:22:00.124 iops : min= 138, max= 260, avg=213.60, stdev=39.89, samples=10 00:22:00.124 lat (msec) : 10=24.88%, 20=70.16%, 50=1.03%, 100=3.93% 00:22:00.124 cpu : usr=90.70%, sys=8.86%, ctx=14, majf=0, minf=140 00:22:00.124 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:00.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.124 issued rwts: total=1069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:00.124 filename0: (groupid=0, jobs=1): err= 0: pid=1778725: Thu Apr 18 17:09:14 2024 00:22:00.124 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(126MiB/5002msec) 00:22:00.124 slat (usec): min=4, max=108, avg=13.50, stdev= 4.95 00:22:00.124 clat (usec): min=5697, max=57200, avg=14926.76, stdev=10711.97 00:22:00.124 lat (usec): min=5709, max=57227, avg=14940.25, 
stdev=10712.23 00:22:00.124 clat percentiles (usec): 00:22:00.124 | 1.00th=[ 6128], 5.00th=[ 8029], 10.00th=[ 8848], 20.00th=[ 9896], 00:22:00.124 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12387], 60.00th=[13042], 00:22:00.124 | 70.00th=[13698], 80.00th=[14615], 90.00th=[16319], 95.00th=[51119], 00:22:00.124 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[57410], 00:22:00.124 | 99.99th=[57410] 00:22:00.124 bw ( KiB/s): min=17152, max=34304, per=32.26%, avg=25799.11, stdev=5315.90, samples=9 00:22:00.124 iops : min= 134, max= 268, avg=201.56, stdev=41.53, samples=9 00:22:00.124 lat (msec) : 10=20.62%, 20=72.21%, 50=1.20%, 100=5.98% 00:22:00.124 cpu : usr=90.66%, sys=8.92%, ctx=11, majf=0, minf=96 00:22:00.124 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:00.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.124 issued rwts: total=1004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:00.124 filename0: (groupid=0, jobs=1): err= 0: pid=1778726: Thu Apr 18 17:09:14 2024 00:22:00.124 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(135MiB/5044msec) 00:22:00.124 slat (nsec): min=4705, max=48737, avg=14668.60, stdev=5722.52 00:22:00.124 clat (usec): min=4845, max=57935, avg=13978.17, stdev=9189.25 00:22:00.124 lat (usec): min=4856, max=57971, avg=13992.83, stdev=9188.88 00:22:00.124 clat percentiles (usec): 00:22:00.124 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 7898], 20.00th=[ 9241], 00:22:00.124 | 30.00th=[10421], 40.00th=[11469], 50.00th=[12387], 60.00th=[13304], 00:22:00.124 | 70.00th=[14353], 80.00th=[15533], 90.00th=[16909], 95.00th=[20055], 00:22:00.124 | 99.00th=[54264], 99.50th=[55313], 99.90th=[56886], 99.95th=[57934], 00:22:00.124 | 99.99th=[57934] 00:22:00.124 bw ( KiB/s): min=16128, max=34304, per=34.45%, avg=27545.60, stdev=5535.76, 
samples=10 00:22:00.124 iops : min= 126, max= 268, avg=215.20, stdev=43.25, samples=10 00:22:00.124 lat (msec) : 10=27.27%, 20=67.72%, 50=1.48%, 100=3.53% 00:22:00.124 cpu : usr=90.38%, sys=8.76%, ctx=49, majf=0, minf=84 00:22:00.124 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:00.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.125 issued rwts: total=1078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.125 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:00.125 00:22:00.125 Run status group 0 (all jobs): 00:22:00.125 READ: bw=78.1MiB/s (81.9MB/s), 25.1MiB/s-26.7MiB/s (26.3MB/s-28.0MB/s), io=394MiB (413MB), run=5002-5044msec 00:22:00.125 17:09:14 -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:00.125 17:09:14 -- target/dif.sh@43 -- # local sub 00:22:00.125 17:09:14 -- target/dif.sh@45 -- # for sub in "$@" 00:22:00.125 17:09:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:00.125 17:09:14 -- target/dif.sh@36 -- # local sub_id=0 00:22:00.125 17:09:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@109 -- # NULL_DIF=2 00:22:00.125 17:09:14 -- target/dif.sh@109 -- # bs=4k 00:22:00.125 17:09:14 -- target/dif.sh@109 -- # numjobs=8 00:22:00.125 17:09:14 -- target/dif.sh@109 -- # iodepth=16 00:22:00.125 17:09:14 -- 
target/dif.sh@109 -- # runtime= 00:22:00.125 17:09:14 -- target/dif.sh@109 -- # files=2 00:22:00.125 17:09:14 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:00.125 17:09:14 -- target/dif.sh@28 -- # local sub 00:22:00.125 17:09:14 -- target/dif.sh@30 -- # for sub in "$@" 00:22:00.125 17:09:14 -- target/dif.sh@31 -- # create_subsystem 0 00:22:00.125 17:09:14 -- target/dif.sh@18 -- # local sub_id=0 00:22:00.125 17:09:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 bdev_null0 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 [2024-04-18 17:09:14.881472] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@30 -- # for sub in 
"$@" 00:22:00.125 17:09:14 -- target/dif.sh@31 -- # create_subsystem 1 00:22:00.125 17:09:14 -- target/dif.sh@18 -- # local sub_id=1 00:22:00.125 17:09:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 bdev_null1 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@30 -- # for sub in "$@" 00:22:00.125 17:09:14 -- target/dif.sh@31 -- # create_subsystem 2 00:22:00.125 17:09:14 -- target/dif.sh@18 -- # local sub_id=2 00:22:00.125 17:09:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 
bdev_null2 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:00.125 17:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.125 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:00.125 17:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.125 17:09:14 -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:00.125 17:09:14 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:00.125 17:09:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:00.125 17:09:14 -- nvmf/common.sh@521 -- # config=() 00:22:00.125 17:09:14 -- nvmf/common.sh@521 -- # local subsystem config 00:22:00.125 17:09:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.125 17:09:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.125 17:09:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.125 { 00:22:00.125 "params": { 00:22:00.125 "name": "Nvme$subsystem", 00:22:00.125 "trtype": "$TEST_TRANSPORT", 00:22:00.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.125 "adrfam": "ipv4", 00:22:00.125 "trsvcid": "$NVMF_PORT", 00:22:00.125 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.125 "hdgst": ${hdgst:-false}, 00:22:00.125 "ddgst": ${ddgst:-false} 00:22:00.125 }, 00:22:00.125 "method": "bdev_nvme_attach_controller" 00:22:00.125 } 00:22:00.125 EOF 00:22:00.125 )") 00:22:00.125 17:09:14 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.125 17:09:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:00.125 17:09:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.125 17:09:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:00.125 17:09:14 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:00.125 17:09:14 -- common/autotest_common.sh@1327 -- # shift 00:22:00.125 17:09:14 -- target/dif.sh@82 -- # gen_fio_conf 00:22:00.125 17:09:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:00.125 17:09:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.125 17:09:14 -- target/dif.sh@54 -- # local file 00:22:00.125 17:09:14 -- target/dif.sh@56 -- # cat 00:22:00.125 17:09:14 -- nvmf/common.sh@543 -- # cat 00:22:00.125 17:09:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:00.125 17:09:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:00.125 17:09:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:00.125 17:09:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:00.125 17:09:14 -- target/dif.sh@72 -- # (( file <= files )) 00:22:00.125 17:09:14 -- target/dif.sh@73 -- # cat 00:22:00.125 17:09:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.125 17:09:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:22:00.125 { 00:22:00.125 "params": { 00:22:00.125 "name": "Nvme$subsystem", 00:22:00.125 "trtype": "$TEST_TRANSPORT", 00:22:00.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.125 "adrfam": "ipv4", 00:22:00.125 "trsvcid": "$NVMF_PORT", 00:22:00.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.125 "hdgst": ${hdgst:-false}, 00:22:00.125 "ddgst": ${ddgst:-false} 00:22:00.125 }, 00:22:00.125 "method": "bdev_nvme_attach_controller" 00:22:00.125 } 00:22:00.125 EOF 00:22:00.125 )") 00:22:00.125 17:09:14 -- nvmf/common.sh@543 -- # cat 00:22:00.125 17:09:14 -- target/dif.sh@72 -- # (( file++ )) 00:22:00.125 17:09:14 -- target/dif.sh@72 -- # (( file <= files )) 00:22:00.125 17:09:14 -- target/dif.sh@73 -- # cat 00:22:00.125 17:09:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.125 17:09:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.125 { 00:22:00.125 "params": { 00:22:00.125 "name": "Nvme$subsystem", 00:22:00.125 "trtype": "$TEST_TRANSPORT", 00:22:00.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.125 "adrfam": "ipv4", 00:22:00.125 "trsvcid": "$NVMF_PORT", 00:22:00.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.125 "hdgst": ${hdgst:-false}, 00:22:00.125 "ddgst": ${ddgst:-false} 00:22:00.125 }, 00:22:00.125 "method": "bdev_nvme_attach_controller" 00:22:00.125 } 00:22:00.125 EOF 00:22:00.125 )") 00:22:00.125 17:09:14 -- target/dif.sh@72 -- # (( file++ )) 00:22:00.125 17:09:14 -- target/dif.sh@72 -- # (( file <= files )) 00:22:00.125 17:09:14 -- nvmf/common.sh@543 -- # cat 00:22:00.125 17:09:14 -- nvmf/common.sh@545 -- # jq . 
00:22:00.125 17:09:14 -- nvmf/common.sh@546 -- # IFS=, 00:22:00.126 17:09:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:00.126 "params": { 00:22:00.126 "name": "Nvme0", 00:22:00.126 "trtype": "tcp", 00:22:00.126 "traddr": "10.0.0.2", 00:22:00.126 "adrfam": "ipv4", 00:22:00.126 "trsvcid": "4420", 00:22:00.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.126 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:00.126 "hdgst": false, 00:22:00.126 "ddgst": false 00:22:00.126 }, 00:22:00.126 "method": "bdev_nvme_attach_controller" 00:22:00.126 },{ 00:22:00.126 "params": { 00:22:00.126 "name": "Nvme1", 00:22:00.126 "trtype": "tcp", 00:22:00.126 "traddr": "10.0.0.2", 00:22:00.126 "adrfam": "ipv4", 00:22:00.126 "trsvcid": "4420", 00:22:00.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.126 "hdgst": false, 00:22:00.126 "ddgst": false 00:22:00.126 }, 00:22:00.126 "method": "bdev_nvme_attach_controller" 00:22:00.126 },{ 00:22:00.126 "params": { 00:22:00.126 "name": "Nvme2", 00:22:00.126 "trtype": "tcp", 00:22:00.126 "traddr": "10.0.0.2", 00:22:00.126 "adrfam": "ipv4", 00:22:00.126 "trsvcid": "4420", 00:22:00.126 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:00.126 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:00.126 "hdgst": false, 00:22:00.126 "ddgst": false 00:22:00.126 }, 00:22:00.126 "method": "bdev_nvme_attach_controller" 00:22:00.126 }' 00:22:00.126 17:09:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:00.126 17:09:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:00.126 17:09:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.126 17:09:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:00.126 17:09:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:00.126 17:09:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:00.126 17:09:14 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:22:00.126 17:09:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:00.126 17:09:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:00.126 17:09:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.126 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:00.126 ... 00:22:00.126 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:00.126 ... 00:22:00.126 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:00.126 ... 00:22:00.126 fio-3.35 00:22:00.126 Starting 24 threads 00:22:00.126 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.330 00:22:12.330 filename0: (groupid=0, jobs=1): err= 0: pid=1779588: Thu Apr 18 17:09:26 2024 00:22:12.330 read: IOPS=368, BW=1474KiB/s (1510kB/s)(14.5MiB/10071msec) 00:22:12.330 slat (usec): min=8, max=114, avg=62.45, stdev=24.42 00:22:12.330 clat (msec): min=26, max=373, avg=42.84, stdev=44.88 00:22:12.330 lat (msec): min=26, max=373, avg=42.91, stdev=44.88 00:22:12.330 clat percentiles (msec): 00:22:12.330 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:12.330 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.330 | 70.00th=[ 37], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:22:12.330 | 99.00th=[ 309], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 376], 00:22:12.330 | 99.99th=[ 376] 00:22:12.330 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.40, stdev=631.83, samples=20 00:22:12.330 iops : min= 32, max= 480, avg=369.60, stdev=157.96, samples=20 00:22:12.330 lat (msec) : 50=96.98%, 100=0.43%, 250=0.43%, 500=2.16% 00:22:12.330 cpu : usr=98.24%, sys=1.32%, ctx=16, majf=0, 
minf=73 00:22:12.330 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:12.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.330 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.330 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.330 filename0: (groupid=0, jobs=1): err= 0: pid=1779589: Thu Apr 18 17:09:26 2024 00:22:12.330 read: IOPS=373, BW=1493KiB/s (1529kB/s)(14.7MiB/10075msec) 00:22:12.330 slat (nsec): min=8025, max=74005, avg=32619.81, stdev=9979.57 00:22:12.330 clat (msec): min=30, max=401, avg=42.56, stdev=37.23 00:22:12.330 lat (msec): min=30, max=401, avg=42.60, stdev=37.23 00:22:12.330 clat percentiles (msec): 00:22:12.330 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.330 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.330 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.330 | 99.00th=[ 253], 99.50th=[ 275], 99.90th=[ 338], 99.95th=[ 401], 00:22:12.330 | 99.99th=[ 401] 00:22:12.330 bw ( KiB/s): min= 192, max= 1920, per=4.21%, avg=1497.60, stdev=600.27, samples=20 00:22:12.330 iops : min= 48, max= 480, avg=374.40, stdev=150.07, samples=20 00:22:12.330 lat (msec) : 50=96.49%, 100=0.11%, 250=2.18%, 500=1.22% 00:22:12.330 cpu : usr=94.12%, sys=3.38%, ctx=132, majf=0, minf=79 00:22:12.330 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:22:12.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.330 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.330 issued rwts: total=3760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.330 filename0: (groupid=0, jobs=1): err= 0: pid=1779590: Thu Apr 18 17:09:26 2024 00:22:12.330 read: IOPS=380, BW=1521KiB/s 
(1557kB/s)(15.0MiB/10089msec) 00:22:12.330 slat (usec): min=7, max=125, avg=36.51, stdev=20.05 00:22:12.330 clat (msec): min=9, max=309, avg=41.76, stdev=36.14 00:22:12.330 lat (msec): min=9, max=309, avg=41.79, stdev=36.13 00:22:12.330 clat percentiles (msec): 00:22:12.330 | 1.00th=[ 22], 5.00th=[ 30], 10.00th=[ 33], 20.00th=[ 33], 00:22:12.330 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.330 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.330 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 309], 00:22:12.330 | 99.99th=[ 309] 00:22:12.330 bw ( KiB/s): min= 240, max= 2272, per=4.29%, avg=1528.40, stdev=620.22, samples=20 00:22:12.330 iops : min= 60, max= 568, avg=382.00, stdev=155.04, samples=20 00:22:12.330 lat (msec) : 10=0.36%, 20=0.63%, 50=95.26%, 100=0.42%, 250=2.09% 00:22:12.330 lat (msec) : 500=1.25% 00:22:12.330 cpu : usr=98.00%, sys=1.41%, ctx=113, majf=0, minf=110 00:22:12.330 IO depths : 1=5.6%, 2=11.5%, 4=23.9%, 8=52.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:22:12.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 issued rwts: total=3836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.331 filename0: (groupid=0, jobs=1): err= 0: pid=1779591: Thu Apr 18 17:09:26 2024 00:22:12.331 read: IOPS=368, BW=1475KiB/s (1510kB/s)(14.5MiB/10066msec) 00:22:12.331 slat (nsec): min=8349, max=61418, avg=30507.68, stdev=9015.67 00:22:12.331 clat (msec): min=32, max=411, avg=43.10, stdev=44.91 00:22:12.331 lat (msec): min=32, max=411, avg=43.13, stdev=44.91 00:22:12.331 clat percentiles (msec): 00:22:12.331 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.331 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.331 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:22:12.331 | 
99.00th=[ 309], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 414], 00:22:12.331 | 99.99th=[ 414] 00:22:12.331 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.40, stdev=631.83, samples=20 00:22:12.331 iops : min= 32, max= 480, avg=369.60, stdev=157.96, samples=20 00:22:12.331 lat (msec) : 50=96.98%, 100=0.43%, 250=0.43%, 500=2.16% 00:22:12.331 cpu : usr=98.29%, sys=1.27%, ctx=16, majf=0, minf=60 00:22:12.331 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:12.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.331 filename0: (groupid=0, jobs=1): err= 0: pid=1779592: Thu Apr 18 17:09:26 2024 00:22:12.331 read: IOPS=369, BW=1476KiB/s (1512kB/s)(14.5MiB/10057msec) 00:22:12.331 slat (usec): min=8, max=101, avg=35.84, stdev=12.91 00:22:12.331 clat (msec): min=31, max=517, avg=43.01, stdev=45.44 00:22:12.331 lat (msec): min=31, max=517, avg=43.05, stdev=45.45 00:22:12.331 clat percentiles (msec): 00:22:12.331 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.331 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.331 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:22:12.331 | 99.00th=[ 300], 99.50th=[ 380], 99.90th=[ 502], 99.95th=[ 518], 00:22:12.331 | 99.99th=[ 518] 00:22:12.331 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.55, stdev=630.39, samples=20 00:22:12.331 iops : min= 32, max= 480, avg=369.60, stdev=157.58, samples=20 00:22:12.331 lat (msec) : 50=96.98%, 100=0.43%, 250=0.54%, 500=1.99%, 750=0.05% 00:22:12.331 cpu : usr=97.78%, sys=1.67%, ctx=70, majf=0, minf=73 00:22:12.331 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:12.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.331 filename0: (groupid=0, jobs=1): err= 0: pid=1779593: Thu Apr 18 17:09:26 2024 00:22:12.331 read: IOPS=373, BW=1493KiB/s (1529kB/s)(14.7MiB/10075msec) 00:22:12.331 slat (usec): min=8, max=115, avg=43.26, stdev=22.96 00:22:12.331 clat (msec): min=31, max=355, avg=42.43, stdev=36.30 00:22:12.331 lat (msec): min=31, max=355, avg=42.47, stdev=36.29 00:22:12.331 clat percentiles (msec): 00:22:12.331 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:12.331 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.331 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.331 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 355], 00:22:12.331 | 99.99th=[ 355] 00:22:12.331 bw ( KiB/s): min= 240, max= 1920, per=4.21%, avg=1497.60, stdev=604.83, samples=20 00:22:12.331 iops : min= 60, max= 480, avg=374.40, stdev=151.21, samples=20 00:22:12.331 lat (msec) : 50=96.17%, 100=0.43%, 250=2.34%, 500=1.06% 00:22:12.331 cpu : usr=97.30%, sys=1.78%, ctx=86, majf=0, minf=52 00:22:12.331 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:12.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 issued rwts: total=3760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.331 filename0: (groupid=0, jobs=1): err= 0: pid=1779594: Thu Apr 18 17:09:26 2024 00:22:12.331 read: IOPS=371, BW=1487KiB/s (1523kB/s)(14.6MiB/10070msec) 00:22:12.331 slat (nsec): min=8063, max=61627, avg=27082.90, stdev=9456.81 00:22:12.331 clat (msec): min=32, max=274, avg=42.57, 
stdev=35.62 00:22:12.331 lat (msec): min=32, max=274, avg=42.59, stdev=35.62 00:22:12.331 clat percentiles (msec): 00:22:12.331 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.331 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.331 | 70.00th=[ 37], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.331 | 99.00th=[ 243], 99.50th=[ 259], 99.90th=[ 264], 99.95th=[ 275], 00:22:12.331 | 99.99th=[ 275] 00:22:12.331 bw ( KiB/s): min= 256, max= 1920, per=4.19%, avg=1491.35, stdev=603.68, samples=20 00:22:12.331 iops : min= 64, max= 480, avg=372.80, stdev=150.91, samples=20 00:22:12.331 lat (msec) : 50=96.10%, 100=0.48%, 250=2.56%, 500=0.85% 00:22:12.331 cpu : usr=96.93%, sys=2.12%, ctx=67, majf=0, minf=66 00:22:12.331 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:12.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 issued rwts: total=3744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.331 filename0: (groupid=0, jobs=1): err= 0: pid=1779595: Thu Apr 18 17:09:26 2024 00:22:12.331 read: IOPS=369, BW=1476KiB/s (1512kB/s)(14.5MiB/10057msec) 00:22:12.331 slat (usec): min=8, max=100, avg=36.12, stdev=11.80 00:22:12.331 clat (msec): min=32, max=423, avg=43.03, stdev=44.63 00:22:12.331 lat (msec): min=32, max=423, avg=43.07, stdev=44.63 00:22:12.331 clat percentiles (msec): 00:22:12.331 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.331 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.331 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.331 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 426], 00:22:12.331 | 99.99th=[ 426] 00:22:12.331 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.40, stdev=631.00, samples=20 00:22:12.331 
iops : min= 32, max= 480, avg=369.60, stdev=157.75, samples=20 00:22:12.331 lat (msec) : 50=96.98%, 100=0.43%, 250=0.43%, 500=2.16% 00:22:12.331 cpu : usr=95.83%, sys=2.60%, ctx=107, majf=0, minf=72 00:22:12.331 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:12.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.331 filename1: (groupid=0, jobs=1): err= 0: pid=1779596: Thu Apr 18 17:09:26 2024 00:22:12.331 read: IOPS=373, BW=1493KiB/s (1529kB/s)(14.7MiB/10075msec) 00:22:12.331 slat (usec): min=8, max=114, avg=34.54, stdev=18.79 00:22:12.331 clat (msec): min=31, max=284, avg=42.55, stdev=36.20 00:22:12.331 lat (msec): min=31, max=284, avg=42.58, stdev=36.20 00:22:12.331 clat percentiles (msec): 00:22:12.331 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.331 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.331 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.331 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 284], 00:22:12.331 | 99.99th=[ 284] 00:22:12.331 bw ( KiB/s): min= 256, max= 1920, per=4.21%, avg=1497.60, stdev=604.81, samples=20 00:22:12.331 iops : min= 64, max= 480, avg=374.40, stdev=151.20, samples=20 00:22:12.331 lat (msec) : 50=96.17%, 100=0.43%, 250=2.13%, 500=1.28% 00:22:12.331 cpu : usr=98.35%, sys=1.24%, ctx=18, majf=0, minf=74 00:22:12.331 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:12.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 issued rwts: total=3760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.331 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:22:12.331 filename1: (groupid=0, jobs=1): err= 0: pid=1779597: Thu Apr 18 17:09:26 2024 00:22:12.331 read: IOPS=373, BW=1494KiB/s (1529kB/s)(14.7MiB/10070msec) 00:22:12.331 slat (usec): min=8, max=105, avg=34.23, stdev=10.97 00:22:12.331 clat (msec): min=32, max=276, avg=42.55, stdev=36.81 00:22:12.331 lat (msec): min=32, max=276, avg=42.58, stdev=36.80 00:22:12.331 clat percentiles (msec): 00:22:12.331 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.331 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.331 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.331 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275], 00:22:12.331 | 99.99th=[ 275] 00:22:12.331 bw ( KiB/s): min= 256, max= 1920, per=4.21%, avg=1497.75, stdev=600.59, samples=20 00:22:12.331 iops : min= 64, max= 480, avg=374.40, stdev=150.13, samples=20 00:22:12.331 lat (msec) : 50=96.60%, 250=2.13%, 500=1.28% 00:22:12.331 cpu : usr=97.41%, sys=1.73%, ctx=84, majf=0, minf=56 00:22:12.331 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:12.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.331 issued rwts: total=3760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.331 filename1: (groupid=0, jobs=1): err= 0: pid=1779598: Thu Apr 18 17:09:26 2024 00:22:12.331 read: IOPS=369, BW=1477KiB/s (1512kB/s)(14.5MiB/10053msec) 00:22:12.331 slat (usec): min=8, max=116, avg=43.95, stdev=20.66 00:22:12.331 clat (msec): min=31, max=409, avg=42.94, stdev=44.57 00:22:12.331 lat (msec): min=31, max=410, avg=42.98, stdev=44.57 00:22:12.331 clat percentiles (msec): 00:22:12.331 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:12.331 | 30.00th=[ 34], 40.00th=[ 34], 
50.00th=[ 34], 60.00th=[ 34], 00:22:12.331 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.331 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 409], 00:22:12.331 | 99.99th=[ 409] 00:22:12.331 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.55, stdev=630.54, samples=20 00:22:12.331 iops : min= 32, max= 480, avg=369.60, stdev=157.62, samples=20 00:22:12.332 lat (msec) : 50=96.98%, 100=0.43%, 250=0.43%, 500=2.16% 00:22:12.332 cpu : usr=96.08%, sys=2.29%, ctx=119, majf=0, minf=72 00:22:12.332 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:12.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.332 filename1: (groupid=0, jobs=1): err= 0: pid=1779599: Thu Apr 18 17:09:26 2024 00:22:12.332 read: IOPS=368, BW=1476KiB/s (1511kB/s)(14.5MiB/10063msec) 00:22:12.332 slat (usec): min=11, max=103, avg=35.34, stdev=14.63 00:22:12.332 clat (msec): min=27, max=373, avg=43.07, stdev=44.88 00:22:12.332 lat (msec): min=27, max=373, avg=43.10, stdev=44.88 00:22:12.332 clat percentiles (msec): 00:22:12.332 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.332 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.332 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:22:12.332 | 99.00th=[ 309], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 376], 00:22:12.332 | 99.99th=[ 376] 00:22:12.332 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.40, stdev=630.49, samples=20 00:22:12.332 iops : min= 32, max= 480, avg=369.60, stdev=157.62, samples=20 00:22:12.332 lat (msec) : 50=96.98%, 100=0.43%, 250=0.43%, 500=2.16% 00:22:12.332 cpu : usr=96.95%, sys=2.02%, ctx=82, majf=0, minf=59 00:22:12.332 IO depths : 1=5.9%, 
2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:22:12.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.332 filename1: (groupid=0, jobs=1): err= 0: pid=1779600: Thu Apr 18 17:09:26 2024 00:22:12.332 read: IOPS=371, BW=1486KiB/s (1522kB/s)(14.6MiB/10075msec) 00:22:12.332 slat (nsec): min=8066, max=95946, avg=20711.34, stdev=10597.92 00:22:12.332 clat (msec): min=26, max=445, avg=42.87, stdev=37.49 00:22:12.332 lat (msec): min=26, max=445, avg=42.89, stdev=37.49 00:22:12.332 clat percentiles (msec): 00:22:12.332 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:22:12.332 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.332 | 70.00th=[ 37], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.332 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 447], 00:22:12.332 | 99.99th=[ 447] 00:22:12.332 bw ( KiB/s): min= 256, max= 1920, per=4.19%, avg=1491.35, stdev=603.68, samples=20 00:22:12.332 iops : min= 64, max= 480, avg=372.80, stdev=150.91, samples=20 00:22:12.332 lat (msec) : 50=96.15%, 100=0.43%, 250=2.19%, 500=1.23% 00:22:12.332 cpu : usr=98.29%, sys=1.28%, ctx=26, majf=0, minf=88 00:22:12.332 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:12.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 issued rwts: total=3744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.332 filename1: (groupid=0, jobs=1): err= 0: pid=1779601: Thu Apr 18 17:09:26 2024 00:22:12.332 read: IOPS=369, BW=1476KiB/s (1512kB/s)(14.5MiB/10058msec) 00:22:12.332 
slat (usec): min=11, max=113, avg=39.50, stdev=18.51 00:22:12.332 clat (msec): min=31, max=411, avg=43.00, stdev=44.58 00:22:12.332 lat (msec): min=31, max=411, avg=43.04, stdev=44.57 00:22:12.332 clat percentiles (msec): 00:22:12.332 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:12.332 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.332 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.332 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 414], 00:22:12.332 | 99.99th=[ 414] 00:22:12.332 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.40, stdev=631.83, samples=20 00:22:12.332 iops : min= 32, max= 480, avg=369.60, stdev=157.96, samples=20 00:22:12.332 lat (msec) : 50=97.17%, 100=0.24%, 250=0.43%, 500=2.16% 00:22:12.332 cpu : usr=98.34%, sys=1.25%, ctx=19, majf=0, minf=72 00:22:12.332 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:12.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.332 filename1: (groupid=0, jobs=1): err= 0: pid=1779602: Thu Apr 18 17:09:26 2024 00:22:12.332 read: IOPS=376, BW=1504KiB/s (1540kB/s)(14.8MiB/10084msec) 00:22:12.332 slat (nsec): min=6271, max=49679, avg=14847.16, stdev=8955.85 00:22:12.332 clat (msec): min=9, max=276, avg=42.42, stdev=36.10 00:22:12.332 lat (msec): min=9, max=276, avg=42.44, stdev=36.10 00:22:12.332 clat percentiles (msec): 00:22:12.332 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:22:12.332 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.332 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.332 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275], 00:22:12.332 | 99.99th=[ 
275] 00:22:12.332 bw ( KiB/s): min= 256, max= 1923, per=4.24%, avg=1510.80, stdev=602.76, samples=20 00:22:12.332 iops : min= 64, max= 480, avg=377.60, stdev=150.67, samples=20 00:22:12.332 lat (msec) : 10=0.18%, 20=0.47%, 50=95.54%, 100=0.42%, 250=2.11% 00:22:12.332 lat (msec) : 500=1.27% 00:22:12.332 cpu : usr=98.48%, sys=1.11%, ctx=19, majf=0, minf=80 00:22:12.332 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:12.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 issued rwts: total=3792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.332 filename1: (groupid=0, jobs=1): err= 0: pid=1779603: Thu Apr 18 17:09:26 2024 00:22:12.332 read: IOPS=372, BW=1492KiB/s (1528kB/s)(14.7MiB/10082msec) 00:22:12.332 slat (nsec): min=4494, max=84034, avg=27146.03, stdev=13792.09 00:22:12.332 clat (msec): min=32, max=276, avg=42.67, stdev=36.80 00:22:12.332 lat (msec): min=32, max=276, avg=42.70, stdev=36.80 00:22:12.332 clat percentiles (msec): 00:22:12.332 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.332 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.332 | 70.00th=[ 38], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.332 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275], 00:22:12.332 | 99.99th=[ 275] 00:22:12.332 bw ( KiB/s): min= 256, max= 1920, per=4.21%, avg=1497.60, stdev=604.81, samples=20 00:22:12.332 iops : min= 64, max= 480, avg=374.40, stdev=151.20, samples=20 00:22:12.332 lat (msec) : 50=96.60%, 250=2.13%, 500=1.28% 00:22:12.332 cpu : usr=96.37%, sys=2.33%, ctx=75, majf=0, minf=84 00:22:12.332 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:12.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 issued rwts: total=3760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.332 filename2: (groupid=0, jobs=1): err= 0: pid=1779604: Thu Apr 18 17:09:26 2024 00:22:12.332 read: IOPS=368, BW=1474KiB/s (1509kB/s)(14.5MiB/10075msec) 00:22:12.332 slat (usec): min=8, max=112, avg=33.86, stdev=21.27 00:22:12.332 clat (msec): min=26, max=415, avg=43.13, stdev=45.01 00:22:12.332 lat (msec): min=26, max=415, avg=43.16, stdev=45.01 00:22:12.332 clat percentiles (msec): 00:22:12.332 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.332 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.332 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.332 | 99.00th=[ 309], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 418], 00:22:12.332 | 99.99th=[ 418] 00:22:12.332 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.40, stdev=631.68, samples=20 00:22:12.332 iops : min= 32, max= 480, avg=369.60, stdev=157.92, samples=20 00:22:12.332 lat (msec) : 50=96.98%, 100=0.43%, 250=0.43%, 500=2.16% 00:22:12.332 cpu : usr=98.02%, sys=1.47%, ctx=34, majf=0, minf=72 00:22:12.332 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:12.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.332 filename2: (groupid=0, jobs=1): err= 0: pid=1779605: Thu Apr 18 17:09:26 2024 00:22:12.332 read: IOPS=373, BW=1493KiB/s (1529kB/s)(14.7MiB/10075msec) 00:22:12.332 slat (nsec): min=8160, max=88826, avg=25968.78, stdev=12147.26 00:22:12.332 clat (msec): min=23, max=373, avg=42.62, stdev=37.09 00:22:12.332 lat (msec): min=23, max=373, 
avg=42.65, stdev=37.09 00:22:12.332 clat percentiles (msec): 00:22:12.332 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:22:12.332 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.332 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.332 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 296], 99.95th=[ 372], 00:22:12.332 | 99.99th=[ 372] 00:22:12.332 bw ( KiB/s): min= 240, max= 1920, per=4.21%, avg=1497.60, stdev=600.54, samples=20 00:22:12.332 iops : min= 60, max= 480, avg=374.40, stdev=150.13, samples=20 00:22:12.332 lat (msec) : 50=96.60%, 250=2.34%, 500=1.06% 00:22:12.332 cpu : usr=96.18%, sys=2.63%, ctx=258, majf=0, minf=93 00:22:12.332 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:12.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.332 issued rwts: total=3760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.332 filename2: (groupid=0, jobs=1): err= 0: pid=1779606: Thu Apr 18 17:09:26 2024 00:22:12.332 read: IOPS=373, BW=1494KiB/s (1529kB/s)(14.7MiB/10070msec) 00:22:12.332 slat (usec): min=8, max=105, avg=36.43, stdev=20.88 00:22:12.332 clat (msec): min=31, max=276, avg=42.52, stdev=36.18 00:22:12.332 lat (msec): min=31, max=276, avg=42.56, stdev=36.18 00:22:12.332 clat percentiles (msec): 00:22:12.332 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.332 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.333 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.333 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275], 00:22:12.333 | 99.99th=[ 275] 00:22:12.333 bw ( KiB/s): min= 256, max= 1920, per=4.21%, avg=1497.60, stdev=604.81, samples=20 00:22:12.333 iops : min= 64, max= 480, avg=374.40, stdev=151.20, samples=20 
00:22:12.333 lat (msec) : 50=96.17%, 100=0.43%, 250=2.13%, 500=1.28% 00:22:12.333 cpu : usr=98.03%, sys=1.55%, ctx=23, majf=0, minf=57 00:22:12.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:12.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 issued rwts: total=3760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.333 filename2: (groupid=0, jobs=1): err= 0: pid=1779607: Thu Apr 18 17:09:26 2024 00:22:12.333 read: IOPS=374, BW=1497KiB/s (1533kB/s)(14.8MiB/10089msec) 00:22:12.333 slat (usec): min=6, max=482, avg=34.21, stdev=28.98 00:22:12.333 clat (msec): min=16, max=341, avg=42.42, stdev=37.21 00:22:12.333 lat (msec): min=16, max=341, avg=42.45, stdev=37.20 00:22:12.333 clat percentiles (msec): 00:22:12.333 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.333 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.333 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 44], 00:22:12.333 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 342], 99.95th=[ 342], 00:22:12.333 | 99.99th=[ 342] 00:22:12.333 bw ( KiB/s): min= 240, max= 1923, per=4.23%, avg=1504.35, stdev=598.85, samples=20 00:22:12.333 iops : min= 60, max= 480, avg=376.00, stdev=149.69, samples=20 00:22:12.333 lat (msec) : 20=0.42%, 50=96.19%, 250=2.28%, 500=1.11% 00:22:12.333 cpu : usr=97.77%, sys=1.78%, ctx=41, majf=0, minf=131 00:22:12.333 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:12.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 issued rwts: total=3776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.333 
filename2: (groupid=0, jobs=1): err= 0: pid=1779608: Thu Apr 18 17:09:26 2024 00:22:12.333 read: IOPS=369, BW=1476KiB/s (1512kB/s)(14.5MiB/10057msec) 00:22:12.333 slat (nsec): min=8011, max=87784, avg=33543.04, stdev=10126.12 00:22:12.333 clat (msec): min=32, max=383, avg=43.03, stdev=44.85 00:22:12.333 lat (msec): min=32, max=383, avg=43.07, stdev=44.84 00:22:12.333 clat percentiles (msec): 00:22:12.333 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.333 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.333 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:22:12.333 | 99.00th=[ 300], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:22:12.333 | 99.99th=[ 384] 00:22:12.333 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.55, stdev=630.54, samples=20 00:22:12.333 iops : min= 32, max= 480, avg=369.60, stdev=157.62, samples=20 00:22:12.333 lat (msec) : 50=96.98%, 100=0.43%, 250=0.40%, 500=2.18% 00:22:12.333 cpu : usr=98.29%, sys=1.30%, ctx=16, majf=0, minf=75 00:22:12.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:12.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.333 filename2: (groupid=0, jobs=1): err= 0: pid=1779609: Thu Apr 18 17:09:26 2024 00:22:12.333 read: IOPS=369, BW=1476KiB/s (1512kB/s)(14.5MiB/10057msec) 00:22:12.333 slat (usec): min=8, max=116, avg=64.13, stdev=23.75 00:22:12.333 clat (msec): min=24, max=442, avg=42.78, stdev=44.79 00:22:12.333 lat (msec): min=24, max=442, avg=42.84, stdev=44.78 00:22:12.333 clat percentiles (msec): 00:22:12.333 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:12.333 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.333 | 
70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:22:12.333 | 99.00th=[ 300], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 443], 00:22:12.333 | 99.99th=[ 443] 00:22:12.333 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.55, stdev=631.88, samples=20 00:22:12.333 iops : min= 32, max= 480, avg=369.60, stdev=157.96, samples=20 00:22:12.333 lat (msec) : 50=96.98%, 100=0.43%, 250=0.48%, 500=2.10% 00:22:12.333 cpu : usr=97.71%, sys=1.46%, ctx=63, majf=0, minf=75 00:22:12.333 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:22:12.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.333 filename2: (groupid=0, jobs=1): err= 0: pid=1779610: Thu Apr 18 17:09:26 2024 00:22:12.333 read: IOPS=368, BW=1475KiB/s (1510kB/s)(14.5MiB/10066msec) 00:22:12.333 slat (usec): min=6, max=107, avg=29.00, stdev=10.96 00:22:12.333 clat (msec): min=32, max=415, avg=43.12, stdev=44.93 00:22:12.333 lat (msec): min=32, max=415, avg=43.15, stdev=44.93 00:22:12.333 clat percentiles (msec): 00:22:12.333 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.333 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.333 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:22:12.333 | 99.00th=[ 309], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 414], 00:22:12.333 | 99.99th=[ 414] 00:22:12.333 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.40, stdev=631.83, samples=20 00:22:12.333 iops : min= 32, max= 480, avg=369.60, stdev=157.96, samples=20 00:22:12.333 lat (msec) : 50=96.98%, 100=0.43%, 250=0.43%, 500=2.16% 00:22:12.333 cpu : usr=97.33%, sys=1.82%, ctx=62, majf=0, minf=61 00:22:12.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, 
>=64=0.0% 00:22:12.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.333 filename2: (groupid=0, jobs=1): err= 0: pid=1779611: Thu Apr 18 17:09:26 2024 00:22:12.333 read: IOPS=369, BW=1477KiB/s (1512kB/s)(14.5MiB/10054msec) 00:22:12.333 slat (nsec): min=8296, max=94447, avg=32664.06, stdev=8586.16 00:22:12.333 clat (msec): min=32, max=411, avg=43.04, stdev=44.57 00:22:12.333 lat (msec): min=32, max=411, avg=43.08, stdev=44.57 00:22:12.333 clat percentiles (msec): 00:22:12.333 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:12.333 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:12.333 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:22:12.333 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 414], 00:22:12.333 | 99.99th=[ 414] 00:22:12.333 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=1478.40, stdev=630.46, samples=20 00:22:12.333 iops : min= 32, max= 480, avg=369.60, stdev=157.62, samples=20 00:22:12.333 lat (msec) : 50=96.98%, 100=0.43%, 250=0.43%, 500=2.16% 00:22:12.333 cpu : usr=98.29%, sys=1.32%, ctx=16, majf=0, minf=72 00:22:12.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:12.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.333 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:12.333 00:22:12.333 Run status group 0 (all jobs): 00:22:12.333 READ: bw=34.8MiB/s (36.4MB/s), 1474KiB/s-1521KiB/s (1509kB/s-1557kB/s), io=351MiB (368MB), run=10053-10089msec 00:22:12.333 17:09:26 -- target/dif.sh@113 
-- # destroy_subsystems 0 1 2 00:22:12.333 17:09:26 -- target/dif.sh@43 -- # local sub 00:22:12.333 17:09:26 -- target/dif.sh@45 -- # for sub in "$@" 00:22:12.333 17:09:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:12.333 17:09:26 -- target/dif.sh@36 -- # local sub_id=0 00:22:12.333 17:09:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:12.333 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 17:09:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:12.333 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 17:09:26 -- target/dif.sh@45 -- # for sub in "$@" 00:22:12.333 17:09:26 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:12.333 17:09:26 -- target/dif.sh@36 -- # local sub_id=1 00:22:12.333 17:09:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.333 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 17:09:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:12.333 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 17:09:26 -- target/dif.sh@45 -- # for sub in "$@" 00:22:12.333 17:09:26 -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:12.333 17:09:26 -- target/dif.sh@36 -- # local sub_id=2 00:22:12.333 17:09:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 
00:22:12.333 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 17:09:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:12.333 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.333 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.333 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.333 17:09:26 -- target/dif.sh@115 -- # NULL_DIF=1 00:22:12.333 17:09:26 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:12.333 17:09:26 -- target/dif.sh@115 -- # numjobs=2 00:22:12.333 17:09:26 -- target/dif.sh@115 -- # iodepth=8 00:22:12.333 17:09:26 -- target/dif.sh@115 -- # runtime=5 00:22:12.333 17:09:26 -- target/dif.sh@115 -- # files=1 00:22:12.333 17:09:26 -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:12.334 17:09:26 -- target/dif.sh@28 -- # local sub 00:22:12.334 17:09:26 -- target/dif.sh@30 -- # for sub in "$@" 00:22:12.334 17:09:26 -- target/dif.sh@31 -- # create_subsystem 0 00:22:12.334 17:09:26 -- target/dif.sh@18 -- # local sub_id=0 00:22:12.334 17:09:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:12.334 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 bdev_null0 00:22:12.334 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 17:09:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:12.334 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 17:09:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:12.334 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 17:09:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:12.334 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 [2024-04-18 17:09:26.643740] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.334 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 17:09:26 -- target/dif.sh@30 -- # for sub in "$@" 00:22:12.334 17:09:26 -- target/dif.sh@31 -- # create_subsystem 1 00:22:12.334 17:09:26 -- target/dif.sh@18 -- # local sub_id=1 00:22:12.334 17:09:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:12.334 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 bdev_null1 00:22:12.334 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 17:09:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:12.334 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 17:09:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:12.334 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 17:09:26 -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.334 17:09:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.334 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.334 17:09:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.334 17:09:26 -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:12.334 17:09:26 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:12.334 17:09:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:12.334 17:09:26 -- nvmf/common.sh@521 -- # config=() 00:22:12.334 17:09:26 -- nvmf/common.sh@521 -- # local subsystem config 00:22:12.334 17:09:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.334 17:09:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.334 { 00:22:12.334 "params": { 00:22:12.334 "name": "Nvme$subsystem", 00:22:12.334 "trtype": "$TEST_TRANSPORT", 00:22:12.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.334 "adrfam": "ipv4", 00:22:12.334 "trsvcid": "$NVMF_PORT", 00:22:12.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.334 "hdgst": ${hdgst:-false}, 00:22:12.334 "ddgst": ${ddgst:-false} 00:22:12.334 }, 00:22:12.334 "method": "bdev_nvme_attach_controller" 00:22:12.334 } 00:22:12.334 EOF 00:22:12.334 )") 00:22:12.334 17:09:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:12.334 17:09:26 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:12.334 17:09:26 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:12.334 17:09:26 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:12.334 17:09:26 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:12.334 17:09:26 -- 
common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:12.334 17:09:26 -- target/dif.sh@82 -- # gen_fio_conf 00:22:12.334 17:09:26 -- common/autotest_common.sh@1327 -- # shift 00:22:12.334 17:09:26 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:12.334 17:09:26 -- target/dif.sh@54 -- # local file 00:22:12.334 17:09:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:12.334 17:09:26 -- target/dif.sh@56 -- # cat 00:22:12.334 17:09:26 -- nvmf/common.sh@543 -- # cat 00:22:12.334 17:09:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:12.334 17:09:26 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:12.334 17:09:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:12.334 17:09:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:12.334 17:09:26 -- target/dif.sh@72 -- # (( file <= files )) 00:22:12.334 17:09:26 -- target/dif.sh@73 -- # cat 00:22:12.334 17:09:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.334 17:09:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.334 { 00:22:12.334 "params": { 00:22:12.334 "name": "Nvme$subsystem", 00:22:12.334 "trtype": "$TEST_TRANSPORT", 00:22:12.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.334 "adrfam": "ipv4", 00:22:12.334 "trsvcid": "$NVMF_PORT", 00:22:12.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.334 "hdgst": ${hdgst:-false}, 00:22:12.334 "ddgst": ${ddgst:-false} 00:22:12.334 }, 00:22:12.334 "method": "bdev_nvme_attach_controller" 00:22:12.334 } 00:22:12.334 EOF 00:22:12.334 )") 00:22:12.334 17:09:26 -- nvmf/common.sh@543 -- # cat 00:22:12.334 17:09:26 -- target/dif.sh@72 -- # (( file++ )) 00:22:12.334 17:09:26 -- target/dif.sh@72 -- # (( file <= files )) 00:22:12.334 17:09:26 -- nvmf/common.sh@545 -- # jq . 
00:22:12.334 17:09:26 -- nvmf/common.sh@546 -- # IFS=, 00:22:12.334 17:09:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:12.334 "params": { 00:22:12.334 "name": "Nvme0", 00:22:12.334 "trtype": "tcp", 00:22:12.334 "traddr": "10.0.0.2", 00:22:12.334 "adrfam": "ipv4", 00:22:12.334 "trsvcid": "4420", 00:22:12.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:12.334 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:12.334 "hdgst": false, 00:22:12.334 "ddgst": false 00:22:12.334 }, 00:22:12.334 "method": "bdev_nvme_attach_controller" 00:22:12.334 },{ 00:22:12.334 "params": { 00:22:12.334 "name": "Nvme1", 00:22:12.334 "trtype": "tcp", 00:22:12.334 "traddr": "10.0.0.2", 00:22:12.334 "adrfam": "ipv4", 00:22:12.334 "trsvcid": "4420", 00:22:12.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.334 "hdgst": false, 00:22:12.334 "ddgst": false 00:22:12.334 }, 00:22:12.334 "method": "bdev_nvme_attach_controller" 00:22:12.334 }' 00:22:12.334 17:09:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:12.334 17:09:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:12.334 17:09:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:12.334 17:09:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:12.334 17:09:26 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:12.334 17:09:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:12.334 17:09:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:12.334 17:09:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:12.334 17:09:26 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:12.334 17:09:26 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:12.334 filename0: (g=0): rw=randread, bs=(R) 
8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:12.334 ... 00:22:12.334 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:12.334 ... 00:22:12.334 fio-3.35 00:22:12.334 Starting 4 threads 00:22:12.334 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.600 00:22:17.600 filename0: (groupid=0, jobs=1): err= 0: pid=1780993: Thu Apr 18 17:09:32 2024 00:22:17.600 read: IOPS=1827, BW=14.3MiB/s (15.0MB/s)(71.4MiB/5003msec) 00:22:17.600 slat (nsec): min=4103, max=73881, avg=20980.57, stdev=10344.10 00:22:17.600 clat (usec): min=767, max=7986, avg=4303.42, stdev=598.15 00:22:17.600 lat (usec): min=788, max=7999, avg=4324.40, stdev=597.07 00:22:17.600 clat percentiles (usec): 00:22:17.600 | 1.00th=[ 2999], 5.00th=[ 3654], 10.00th=[ 3818], 20.00th=[ 4015], 00:22:17.600 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:22:17.600 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5866], 00:22:17.600 | 99.00th=[ 6390], 99.50th=[ 6915], 99.90th=[ 7701], 99.95th=[ 7701], 00:22:17.600 | 99.99th=[ 7963] 00:22:17.601 bw ( KiB/s): min=14000, max=14896, per=24.90%, avg=14590.22, stdev=265.75, samples=9 00:22:17.601 iops : min= 1750, max= 1862, avg=1823.78, stdev=33.22, samples=9 00:22:17.601 lat (usec) : 1000=0.11% 00:22:17.601 lat (msec) : 2=0.36%, 4=19.06%, 10=80.47% 00:22:17.601 cpu : usr=94.44%, sys=4.96%, ctx=10, majf=0, minf=67 00:22:17.601 IO depths : 1=0.1%, 2=12.3%, 4=60.9%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.601 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.601 issued rwts: total=9143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.601 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:17.601 filename0: (groupid=0, jobs=1): err= 0: pid=1780994: Thu Apr 18 17:09:32 2024 
00:22:17.601 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5001msec) 00:22:17.601 slat (usec): min=4, max=139, avg=16.70, stdev= 9.92 00:22:17.601 clat (usec): min=831, max=8076, avg=4294.10, stdev=511.78 00:22:17.601 lat (usec): min=843, max=8098, avg=4310.80, stdev=511.34 00:22:17.601 clat percentiles (usec): 00:22:17.601 | 1.00th=[ 3195], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 3982], 00:22:17.601 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:22:17.601 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5473], 00:22:17.601 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[ 7308], 00:22:17.601 | 99.99th=[ 8094] 00:22:17.601 bw ( KiB/s): min=13952, max=15072, per=25.07%, avg=14693.33, stdev=355.44, samples=9 00:22:17.601 iops : min= 1744, max= 1884, avg=1836.67, stdev=44.43, samples=9 00:22:17.601 lat (usec) : 1000=0.01% 00:22:17.601 lat (msec) : 2=0.07%, 4=20.20%, 10=79.73% 00:22:17.601 cpu : usr=93.74%, sys=5.56%, ctx=20, majf=0, minf=111 00:22:17.601 IO depths : 1=0.2%, 2=10.9%, 4=61.7%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.601 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.601 issued rwts: total=9199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.601 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:17.601 filename1: (groupid=0, jobs=1): err= 0: pid=1780995: Thu Apr 18 17:09:32 2024 00:22:17.601 read: IOPS=1835, BW=14.3MiB/s (15.0MB/s)(71.8MiB/5004msec) 00:22:17.601 slat (nsec): min=4223, max=74095, avg=19480.80, stdev=10865.27 00:22:17.601 clat (usec): min=744, max=7805, avg=4293.12, stdev=534.34 00:22:17.601 lat (usec): min=766, max=7818, avg=4312.60, stdev=533.63 00:22:17.601 clat percentiles (usec): 00:22:17.601 | 1.00th=[ 3097], 5.00th=[ 3654], 10.00th=[ 3884], 20.00th=[ 4015], 00:22:17.601 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 
00:22:17.601 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5669], 00:22:17.601 | 99.00th=[ 6128], 99.50th=[ 6521], 99.90th=[ 7111], 99.95th=[ 7504], 00:22:17.601 | 99.99th=[ 7832] 00:22:17.601 bw ( KiB/s): min=13979, max=14976, per=25.06%, avg=14684.30, stdev=301.37, samples=10 00:22:17.601 iops : min= 1747, max= 1872, avg=1835.50, stdev=37.77, samples=10 00:22:17.601 lat (usec) : 750=0.01%, 1000=0.01% 00:22:17.601 lat (msec) : 2=0.20%, 4=19.46%, 10=80.32% 00:22:17.601 cpu : usr=94.32%, sys=5.00%, ctx=27, majf=0, minf=108 00:22:17.601 IO depths : 1=0.1%, 2=10.7%, 4=62.8%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.601 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.601 issued rwts: total=9184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.601 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:17.601 filename1: (groupid=0, jobs=1): err= 0: pid=1780996: Thu Apr 18 17:09:32 2024 00:22:17.601 read: IOPS=1824, BW=14.3MiB/s (14.9MB/s)(71.3MiB/5002msec) 00:22:17.601 slat (nsec): min=4001, max=63910, avg=18834.78, stdev=10067.05 00:22:17.601 clat (usec): min=822, max=8459, avg=4315.97, stdev=584.53 00:22:17.601 lat (usec): min=835, max=8471, avg=4334.81, stdev=583.81 00:22:17.601 clat percentiles (usec): 00:22:17.601 | 1.00th=[ 2933], 5.00th=[ 3687], 10.00th=[ 3884], 20.00th=[ 4015], 00:22:17.601 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:22:17.601 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5800], 00:22:17.601 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7570], 99.95th=[ 7701], 00:22:17.601 | 99.99th=[ 8455] 00:22:17.601 bw ( KiB/s): min=14080, max=14848, per=24.88%, avg=14581.33, stdev=288.89, samples=9 00:22:17.601 iops : min= 1760, max= 1856, avg=1822.67, stdev=36.11, samples=9 00:22:17.601 lat (usec) : 1000=0.02% 00:22:17.601 lat (msec) : 2=0.28%, 4=17.74%, 10=81.96% 
00:22:17.601 cpu : usr=94.32%, sys=5.04%, ctx=19, majf=0, minf=73 00:22:17.601 IO depths : 1=0.1%, 2=14.2%, 4=59.2%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.601 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.601 issued rwts: total=9128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.601 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:17.601 00:22:17.601 Run status group 0 (all jobs): 00:22:17.601 READ: bw=57.2MiB/s (60.0MB/s), 14.3MiB/s-14.4MiB/s (14.9MB/s-15.1MB/s), io=286MiB (300MB), run=5001-5004msec 00:22:17.601 17:09:32 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:17.601 17:09:32 -- target/dif.sh@43 -- # local sub 00:22:17.601 17:09:32 -- target/dif.sh@45 -- # for sub in "$@" 00:22:17.601 17:09:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:17.601 17:09:32 -- target/dif.sh@36 -- # local sub_id=0 00:22:17.601 17:09:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:17.601 17:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.601 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 17:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.601 17:09:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:17.601 17:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.601 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 17:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.601 17:09:32 -- target/dif.sh@45 -- # for sub in "$@" 00:22:17.601 17:09:32 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:17.601 17:09:32 -- target/dif.sh@36 -- # local sub_id=1 00:22:17.601 17:09:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.601 17:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.601 17:09:32 -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.601 17:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.601 17:09:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:17.601 17:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.601 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 17:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.601 00:22:17.601 real 0m24.203s 00:22:17.601 user 4m32.325s 00:22:17.601 sys 0m7.541s 00:22:17.601 17:09:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:17.601 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 ************************************ 00:22:17.601 END TEST fio_dif_rand_params 00:22:17.601 ************************************ 00:22:17.601 17:09:32 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:17.601 17:09:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:17.601 17:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:17.601 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 ************************************ 00:22:17.601 START TEST fio_dif_digest 00:22:17.601 ************************************ 00:22:17.601 17:09:33 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:22:17.601 17:09:33 -- target/dif.sh@123 -- # local NULL_DIF 00:22:17.601 17:09:33 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:17.601 17:09:33 -- target/dif.sh@125 -- # local hdgst ddgst 00:22:17.601 17:09:33 -- target/dif.sh@127 -- # NULL_DIF=3 00:22:17.601 17:09:33 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:17.601 17:09:33 -- target/dif.sh@127 -- # numjobs=3 00:22:17.601 17:09:33 -- target/dif.sh@127 -- # iodepth=3 00:22:17.601 17:09:33 -- target/dif.sh@127 -- # runtime=10 00:22:17.601 17:09:33 -- target/dif.sh@128 -- # hdgst=true 00:22:17.601 17:09:33 -- target/dif.sh@128 -- # ddgst=true 00:22:17.601 17:09:33 -- target/dif.sh@130 -- # 
create_subsystems 0 00:22:17.601 17:09:33 -- target/dif.sh@28 -- # local sub 00:22:17.601 17:09:33 -- target/dif.sh@30 -- # for sub in "$@" 00:22:17.601 17:09:33 -- target/dif.sh@31 -- # create_subsystem 0 00:22:17.601 17:09:33 -- target/dif.sh@18 -- # local sub_id=0 00:22:17.601 17:09:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:17.601 17:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.601 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 bdev_null0 00:22:17.601 17:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.601 17:09:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:17.601 17:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.601 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 17:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.601 17:09:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:17.601 17:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.601 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 17:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.601 17:09:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:17.601 17:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.601 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 [2024-04-18 17:09:33.094170] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.601 17:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.601 17:09:33 -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:17.601 17:09:33 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:17.601 17:09:33 -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:22:17.601 17:09:33 -- nvmf/common.sh@521 -- # config=() 00:22:17.601 17:09:33 -- nvmf/common.sh@521 -- # local subsystem config 00:22:17.601 17:09:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:17.601 17:09:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.601 17:09:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:17.601 { 00:22:17.601 "params": { 00:22:17.602 "name": "Nvme$subsystem", 00:22:17.602 "trtype": "$TEST_TRANSPORT", 00:22:17.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.602 "adrfam": "ipv4", 00:22:17.602 "trsvcid": "$NVMF_PORT", 00:22:17.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.602 "hdgst": ${hdgst:-false}, 00:22:17.602 "ddgst": ${ddgst:-false} 00:22:17.602 }, 00:22:17.602 "method": "bdev_nvme_attach_controller" 00:22:17.602 } 00:22:17.602 EOF 00:22:17.602 )") 00:22:17.602 17:09:33 -- target/dif.sh@82 -- # gen_fio_conf 00:22:17.602 17:09:33 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.602 17:09:33 -- target/dif.sh@54 -- # local file 00:22:17.602 17:09:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:17.602 17:09:33 -- target/dif.sh@56 -- # cat 00:22:17.602 17:09:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:17.602 17:09:33 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:17.602 17:09:33 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.602 17:09:33 -- common/autotest_common.sh@1327 -- # shift 00:22:17.602 17:09:33 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:17.602 17:09:33 -- nvmf/common.sh@543 -- # cat 00:22:17.602 17:09:33 -- 
common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.602 17:09:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:17.602 17:09:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.602 17:09:33 -- target/dif.sh@72 -- # (( file <= files )) 00:22:17.602 17:09:33 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:17.602 17:09:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.602 17:09:33 -- nvmf/common.sh@545 -- # jq . 00:22:17.602 17:09:33 -- nvmf/common.sh@546 -- # IFS=, 00:22:17.602 17:09:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:17.602 "params": { 00:22:17.602 "name": "Nvme0", 00:22:17.602 "trtype": "tcp", 00:22:17.602 "traddr": "10.0.0.2", 00:22:17.602 "adrfam": "ipv4", 00:22:17.602 "trsvcid": "4420", 00:22:17.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:17.602 "hdgst": true, 00:22:17.602 "ddgst": true 00:22:17.602 }, 00:22:17.602 "method": "bdev_nvme_attach_controller" 00:22:17.602 }' 00:22:17.602 17:09:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:17.602 17:09:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:17.602 17:09:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.602 17:09:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.602 17:09:33 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:17.602 17:09:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.602 17:09:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:17.602 17:09:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:17.602 17:09:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:17.602 17:09:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.860 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:17.860 ... 00:22:17.860 fio-3.35 00:22:17.860 Starting 3 threads 00:22:17.860 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.058 00:22:30.058 filename0: (groupid=0, jobs=1): err= 0: pid=1781760: Thu Apr 18 17:09:43 2024 00:22:30.058 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10049msec) 00:22:30.058 slat (nsec): min=7633, max=75174, avg=16448.94, stdev=6444.24 00:22:30.058 clat (usec): min=11451, max=55442, avg=15099.36, stdev=1646.44 00:22:30.058 lat (usec): min=11464, max=55455, avg=15115.81, stdev=1646.06 00:22:30.058 clat percentiles (usec): 00:22:30.058 | 1.00th=[12125], 5.00th=[13173], 10.00th=[13698], 20.00th=[14091], 00:22:30.058 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:22:30.058 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:22:30.058 | 99.00th=[17957], 99.50th=[18482], 99.90th=[49021], 99.95th=[55313], 00:22:30.058 | 99.99th=[55313] 00:22:30.058 bw ( KiB/s): min=24320, max=27136, per=33.98%, avg=25446.40, stdev=806.98, samples=20 00:22:30.058 iops : min= 190, max= 212, avg=198.80, stdev= 6.30, samples=20 00:22:30.058 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:22:30.058 cpu : usr=91.23%, sys=8.23%, ctx=40, majf=0, minf=185 00:22:30.058 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:30.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.058 issued rwts: total=1991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.058 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:30.058 filename0: (groupid=0, jobs=1): err= 0: pid=1781761: Thu Apr 18 17:09:43 2024 00:22:30.058 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(246MiB/10007msec) 
00:22:30.058 slat (nsec): min=7055, max=37373, avg=13972.53, stdev=3904.43 00:22:30.058 clat (usec): min=9157, max=22839, avg=15248.42, stdev=1248.48 00:22:30.058 lat (usec): min=9170, max=22859, avg=15262.39, stdev=1248.41 00:22:30.058 clat percentiles (usec): 00:22:30.058 | 1.00th=[12518], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:22:30.058 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:22:30.058 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:22:30.058 | 99.00th=[18482], 99.50th=[19006], 99.90th=[22938], 99.95th=[22938], 00:22:30.058 | 99.99th=[22938] 00:22:30.058 bw ( KiB/s): min=24064, max=26368, per=33.57%, avg=25141.60, stdev=732.74, samples=20 00:22:30.058 iops : min= 188, max= 206, avg=196.40, stdev= 5.75, samples=20 00:22:30.058 lat (msec) : 10=0.05%, 20=99.64%, 50=0.31% 00:22:30.058 cpu : usr=90.38%, sys=9.13%, ctx=30, majf=0, minf=141 00:22:30.058 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:30.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.058 issued rwts: total=1966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.058 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:30.058 filename0: (groupid=0, jobs=1): err= 0: pid=1781762: Thu Apr 18 17:09:43 2024 00:22:30.058 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(240MiB/10008msec) 00:22:30.058 slat (nsec): min=7367, max=48106, avg=14231.83, stdev=4091.72 00:22:30.058 clat (usec): min=7613, max=23685, avg=15601.53, stdev=1209.55 00:22:30.058 lat (usec): min=7621, max=23703, avg=15615.77, stdev=1209.57 00:22:30.058 clat percentiles (usec): 00:22:30.058 | 1.00th=[12911], 5.00th=[13829], 10.00th=[14222], 20.00th=[14746], 00:22:30.058 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:22:30.058 | 70.00th=[16057], 80.00th=[16581], 90.00th=[16909], 95.00th=[17433], 
00:22:30.058 | 99.00th=[18744], 99.50th=[19792], 99.90th=[23725], 99.95th=[23725], 00:22:30.058 | 99.99th=[23725] 00:22:30.058 bw ( KiB/s): min=23296, max=25856, per=32.80%, avg=24563.20, stdev=645.90, samples=20 00:22:30.058 iops : min= 182, max= 202, avg=191.90, stdev= 5.05, samples=20 00:22:30.058 lat (msec) : 10=0.10%, 20=99.58%, 50=0.31% 00:22:30.059 cpu : usr=90.87%, sys=8.64%, ctx=22, majf=0, minf=100 00:22:30.059 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:30.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.059 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:30.059 00:22:30.059 Run status group 0 (all jobs): 00:22:30.059 READ: bw=73.1MiB/s (76.7MB/s), 24.0MiB/s-24.8MiB/s (25.2MB/s-26.0MB/s), io=735MiB (771MB), run=10007-10049msec 00:22:30.059 17:09:44 -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:30.059 17:09:44 -- target/dif.sh@43 -- # local sub 00:22:30.059 17:09:44 -- target/dif.sh@45 -- # for sub in "$@" 00:22:30.059 17:09:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:30.059 17:09:44 -- target/dif.sh@36 -- # local sub_id=0 00:22:30.059 17:09:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:30.059 17:09:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.059 17:09:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.059 17:09:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.059 17:09:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:30.059 17:09:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.059 17:09:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.059 17:09:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.059 00:22:30.059 real 0m11.088s 00:22:30.059 user 0m28.299s 
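Annotation: the fio run above is driven by a JSON bdev config streamed in over `/dev/fd/62` (the `printf '%s\n' '{ "params": { ... }, "method": "bdev_nvme_attach_controller" }'` visible earlier in the log). A minimal sketch of such a generator is below; the function name `gen_nvmf_target_json` and the single-subsystem layout are illustrative assumptions, not the exact SPDK helper.

```shell
#!/usr/bin/env bash
# Hedged sketch: emit a bdev_nvme_attach_controller JSON fragment like the
# one the log pipes to fio via --spdk_json_conf /dev/fd/62.
# gen_nvmf_target_json is an assumed name, not the real SPDK function.
gen_nvmf_target_json() {
  local sub=$1
  # Field values mirror the log: TCP transport, 10.0.0.2:4420, digests on.
  printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": true, "ddgst": true }, "method": "bdev_nvme_attach_controller" }\n' "$sub" "$sub" "$sub"
}

conf=$(gen_nvmf_target_json 0)
echo "$conf"
```

In the real run this output is fed to `/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62` through process substitution, so no temp file is needed.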
00:22:30.059 sys 0m2.849s 00:22:30.059 17:09:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:30.059 17:09:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.059 ************************************ 00:22:30.059 END TEST fio_dif_digest 00:22:30.059 ************************************ 00:22:30.059 17:09:44 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:30.059 17:09:44 -- target/dif.sh@147 -- # nvmftestfini 00:22:30.059 17:09:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:30.059 17:09:44 -- nvmf/common.sh@117 -- # sync 00:22:30.059 17:09:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:30.059 17:09:44 -- nvmf/common.sh@120 -- # set +e 00:22:30.059 17:09:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:30.059 17:09:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:30.059 rmmod nvme_tcp 00:22:30.059 rmmod nvme_fabrics 00:22:30.059 rmmod nvme_keyring 00:22:30.059 17:09:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.059 17:09:44 -- nvmf/common.sh@124 -- # set -e 00:22:30.059 17:09:44 -- nvmf/common.sh@125 -- # return 0 00:22:30.059 17:09:44 -- nvmf/common.sh@478 -- # '[' -n 1774916 ']' 00:22:30.059 17:09:44 -- nvmf/common.sh@479 -- # killprocess 1774916 00:22:30.059 17:09:44 -- common/autotest_common.sh@936 -- # '[' -z 1774916 ']' 00:22:30.059 17:09:44 -- common/autotest_common.sh@940 -- # kill -0 1774916 00:22:30.059 17:09:44 -- common/autotest_common.sh@941 -- # uname 00:22:30.059 17:09:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.059 17:09:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1774916 00:22:30.059 17:09:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:30.059 17:09:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:30.059 17:09:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1774916' 00:22:30.059 killing process with pid 1774916 00:22:30.059 17:09:44 -- 
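Annotation: earlier in this test, `autotest_common.sh@1330-1338` probes the fio plugin with `ldd | grep libasan | awk '{print $3}'` (then `libclang_rt.asan`) to decide what to put in `LD_PRELOAD`. A self-contained sketch of that probe, using `/bin/true` as a stand-in for `spdk/build/fio/spdk_bdev` and a lowercase `ld_preload` variable so nothing is actually preloaded:

```shell
#!/usr/bin/env bash
# Hedged sketch of the sanitizer-preload probe seen in the log.
# /bin/true stands in for the real fio plugin binary (an assumption).
fio_plugin=/bin/true
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  # ldd's third column is the resolved library path; empty when the
  # binary was not built with that sanitizer.
  asan_lib=$(ldd "$fio_plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break
done
# The real script assigns LD_PRELOAD="$asan_lib $fio_plugin"; note the
# leading space when no sanitizer library is found, exactly as the log
# shows (LD_PRELOAD=' /var/.../spdk_bdev').
ld_preload="$asan_lib $fio_plugin"
echo "$ld_preload"
```

This is why the log prints `asan_lib=` twice (once per sanitizer candidate) before the fio invocation.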
common/autotest_common.sh@955 -- # kill 1774916 00:22:30.059 17:09:44 -- common/autotest_common.sh@960 -- # wait 1774916 00:22:30.059 17:09:44 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:22:30.059 17:09:44 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:30.059 Waiting for block devices as requested 00:22:30.059 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:30.059 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:30.059 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:30.317 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:30.317 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:30.317 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:30.317 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:30.576 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:30.576 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:30.576 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:30.576 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:30.834 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:30.834 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:30.834 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:30.834 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:31.093 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:31.093 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:31.093 17:09:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:31.093 17:09:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:31.093 17:09:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.093 17:09:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.093 17:09:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.093 17:09:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:31.093 17:09:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.623 17:09:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:22:33.623 00:22:33.623 real 1m7.216s 00:22:33.623 user 6m28.622s 00:22:33.623 sys 0m19.870s 00:22:33.623 17:09:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:33.623 17:09:48 -- common/autotest_common.sh@10 -- # set +x 00:22:33.623 ************************************ 00:22:33.623 END TEST nvmf_dif 00:22:33.623 ************************************ 00:22:33.623 17:09:48 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:33.623 17:09:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:33.623 17:09:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:33.623 17:09:48 -- common/autotest_common.sh@10 -- # set +x 00:22:33.623 ************************************ 00:22:33.623 START TEST nvmf_abort_qd_sizes 00:22:33.623 ************************************ 00:22:33.623 17:09:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:33.623 * Looking for test storage... 
00:22:33.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.623 17:09:48 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.623 17:09:48 -- nvmf/common.sh@7 -- # uname -s 00:22:33.623 17:09:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.623 17:09:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.623 17:09:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.623 17:09:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.623 17:09:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.623 17:09:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.623 17:09:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.623 17:09:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.623 17:09:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.623 17:09:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.623 17:09:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.623 17:09:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.623 17:09:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.624 17:09:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.624 17:09:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.624 17:09:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.624 17:09:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.624 17:09:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.624 17:09:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.624 17:09:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.624 17:09:48 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.624 17:09:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.624 17:09:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.624 17:09:48 -- paths/export.sh@5 -- # export PATH 00:22:33.624 17:09:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.624 17:09:48 -- nvmf/common.sh@47 -- # : 0 00:22:33.624 17:09:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.624 17:09:48 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:22:33.624 17:09:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.624 17:09:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.624 17:09:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.624 17:09:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.624 17:09:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.624 17:09:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.624 17:09:48 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:33.624 17:09:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:33.624 17:09:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.624 17:09:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:33.624 17:09:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:33.624 17:09:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:33.624 17:09:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.624 17:09:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:33.624 17:09:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.624 17:09:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:33.624 17:09:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:33.624 17:09:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:33.624 17:09:48 -- common/autotest_common.sh@10 -- # set +x 00:22:35.524 17:09:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:35.524 17:09:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:35.524 17:09:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:35.524 17:09:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:35.524 17:09:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:35.524 17:09:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:35.524 17:09:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:35.524 17:09:50 -- nvmf/common.sh@295 -- # net_devs=() 00:22:35.524 17:09:50 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:22:35.524 17:09:50 -- nvmf/common.sh@296 -- # e810=() 00:22:35.524 17:09:50 -- nvmf/common.sh@296 -- # local -ga e810 00:22:35.524 17:09:50 -- nvmf/common.sh@297 -- # x722=() 00:22:35.525 17:09:50 -- nvmf/common.sh@297 -- # local -ga x722 00:22:35.525 17:09:50 -- nvmf/common.sh@298 -- # mlx=() 00:22:35.525 17:09:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:35.525 17:09:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.525 17:09:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:35.525 17:09:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:35.525 17:09:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:35.525 17:09:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.525 17:09:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:35.525 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:22:35.525 17:09:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.525 17:09:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:35.525 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:35.525 17:09:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:35.525 17:09:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.525 17:09:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.525 17:09:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:35.525 17:09:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.525 17:09:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:35.525 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:35.525 17:09:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.525 17:09:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.525 17:09:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.525 17:09:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:35.525 
17:09:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.525 17:09:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:35.525 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:35.525 17:09:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.525 17:09:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:35.525 17:09:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:35.525 17:09:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:35.525 17:09:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:35.525 17:09:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.525 17:09:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.525 17:09:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.525 17:09:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:35.525 17:09:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.525 17:09:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.525 17:09:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:35.525 17:09:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.525 17:09:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.525 17:09:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:35.525 17:09:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:35.525 17:09:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.525 17:09:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.525 17:09:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.525 17:09:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.525 17:09:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:35.525 17:09:51 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.525 17:09:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.525 17:09:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.525 17:09:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:35.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:22:35.525 00:22:35.525 --- 10.0.0.2 ping statistics --- 00:22:35.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.525 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:35.525 17:09:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:22:35.525 00:22:35.525 --- 10.0.0.1 ping statistics --- 00:22:35.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.525 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:35.525 17:09:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.525 17:09:51 -- nvmf/common.sh@411 -- # return 0 00:22:35.525 17:09:51 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:22:35.525 17:09:51 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:36.901 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:36.901 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:36.901 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:36.901 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:36.901 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:36.901 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:36.901 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:36.901 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:36.901 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:36.901 0000:80:04.6 (8086 0e26): 
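Annotation: the `nvmf_tcp_init` sequence above builds a two-namespace topology: the target-side port (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace with 10.0.0.2/24, the initiator-side port (`cvl_0_1`) stays in the root namespace with 10.0.0.1/24, and port 4420 is opened; the two pings then verify both directions. The sketch below echoes the commands instead of executing them, since the real ones need root and the physical `cvl_*` interfaces; the function name is an assumption.

```shell
#!/usr/bin/env bash
# Hedged dry-run sketch of the nvmf_tcp_init steps in the log.
# Commands are echoed, not run (they require root and real NICs).
nvmf_tcp_init_dryrun() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
  echo "ip netns add $ns"
  echo "ip link set $tgt_if netns $ns"                       # target NIC into the namespace
  echo "ip addr add 10.0.0.1/24 dev $ini_if"                 # initiator side, root ns
  echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
  echo "ip link set $ini_if up"
  echo "ip netns exec $ns ip link set $tgt_if up"
  echo "ip netns exec $ns ip link set lo up"
  echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
}

nvmf_tcp_init_dryrun
```

With this topology, `nvmf_tgt` is later started as `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`, which is why the app listens on 10.0.0.2 while fio connects from the root namespace.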
ioatdma -> vfio-pci 00:22:36.901 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:36.901 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:36.901 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:36.902 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:36.902 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:36.902 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:37.839 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:37.839 17:09:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.839 17:09:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:37.839 17:09:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:37.839 17:09:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.839 17:09:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:37.839 17:09:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:37.839 17:09:53 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:37.839 17:09:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:37.839 17:09:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:37.839 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:22:37.839 17:09:53 -- nvmf/common.sh@470 -- # nvmfpid=1786592 00:22:37.839 17:09:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:37.839 17:09:53 -- nvmf/common.sh@471 -- # waitforlisten 1786592 00:22:37.839 17:09:53 -- common/autotest_common.sh@817 -- # '[' -z 1786592 ']' 00:22:37.839 17:09:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.839 17:09:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:37.839 17:09:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:37.839 17:09:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:37.839 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:22:37.839 [2024-04-18 17:09:53.430295] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:37.839 [2024-04-18 17:09:53.430375] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.839 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.839 [2024-04-18 17:09:53.493982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.097 [2024-04-18 17:09:53.602506] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.097 [2024-04-18 17:09:53.602557] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.097 [2024-04-18 17:09:53.602586] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.097 [2024-04-18 17:09:53.602598] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.097 [2024-04-18 17:09:53.602608] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.097 [2024-04-18 17:09:53.602664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.097 [2024-04-18 17:09:53.602724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.097 [2024-04-18 17:09:53.602774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.097 [2024-04-18 17:09:53.602777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.097 17:09:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:38.097 17:09:53 -- common/autotest_common.sh@850 -- # return 0 00:22:38.097 17:09:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:38.097 17:09:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:38.097 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:22:38.097 17:09:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.097 17:09:53 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:38.097 17:09:53 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:38.097 17:09:53 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:38.097 17:09:53 -- scripts/common.sh@309 -- # local bdf bdfs 00:22:38.097 17:09:53 -- scripts/common.sh@310 -- # local nvmes 00:22:38.097 17:09:53 -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:22:38.097 17:09:53 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:22:38.097 17:09:53 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:38.097 17:09:53 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:22:38.097 17:09:53 -- scripts/common.sh@320 -- # uname -s 00:22:38.097 17:09:53 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:38.097 17:09:53 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:38.097 17:09:53 -- scripts/common.sh@325 -- # (( 1 )) 00:22:38.097 17:09:53 -- 
scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:22:38.097 17:09:53 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:22:38.097 17:09:53 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:22:38.097 17:09:53 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:38.097 17:09:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:38.097 17:09:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:38.097 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:22:38.356 ************************************ 00:22:38.356 START TEST spdk_target_abort 00:22:38.356 ************************************ 00:22:38.356 17:09:53 -- common/autotest_common.sh@1111 -- # spdk_target 00:22:38.356 17:09:53 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:38.356 17:09:53 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:22:38.356 17:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.356 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:22:41.695 spdk_targetn1 00:22:41.695 17:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.695 17:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.695 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:41.695 [2024-04-18 17:09:56.712634] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.695 17:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:41.695 17:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.695 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:41.695 17:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.695 17:09:56 -- 
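Annotation: `nvme_in_userspace` (scripts/common.sh@309-326 above) selects PCI functions still bound to the kernel `nvme` driver by testing `/sys/bus/pci/drivers/nvme/$bdf`, which is how `0000:88:00.0` ends up as the sole `$nvme` target while the `0000:0a:00.*` NICs are skipped. A sketch against a fake sysfs root, so it runs without hardware (the fake tree and candidate list are assumptions):

```shell
#!/usr/bin/env bash
# Hedged sketch of the nvme_in_userspace scan: keep only BDFs bound to
# the kernel nvme driver. Uses a temp dir as a stand-in for /sys.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/bus/pci/drivers/nvme/0000:88:00.0"   # bound to nvme (as in the log)
candidates=("0000:88:00.0" "0000:0a:00.0")            # 0a:00.0 is the ice NIC, not nvme

bdfs=()
for bdf in "${candidates[@]}"; do
  # Real script tests /sys/bus/pci/drivers/nvme/$bdf
  [[ -e $sysfs/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
done

printf '%s\n' "${bdfs[@]}"
rm -rf "$sysfs"
```

The `(( 1 > 0 ))` check in the log is simply the count of this array, gating whether `spdk_target_abort` runs at all.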
target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:41.695 17:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.695 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:41.695 17:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:22:41.695 17:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.695 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:41.695 [2024-04-18 17:09:56.744904] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.695 17:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@28 -- # for r in 
trtype adrfam traddr trsvcid subnqn 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:41.695 17:09:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:41.695 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.224 Initializing NVMe Controllers 00:22:44.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:44.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:44.224 Initialization complete. Launching workers. 
00:22:44.224 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11303, failed: 0 00:22:44.224 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1296, failed to submit 10007 00:22:44.224 success 742, unsuccess 554, failed 0 00:22:44.224 17:09:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:44.224 17:09:59 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:44.224 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.501 Initializing NVMe Controllers 00:22:47.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:47.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:47.501 Initialization complete. Launching workers. 00:22:47.501 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8441, failed: 0 00:22:47.501 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 7180 00:22:47.501 success 315, unsuccess 946, failed 0 00:22:47.501 17:10:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:47.501 17:10:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:47.501 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.778 Initializing NVMe Controllers 00:22:50.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:50.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:50.778 Initialization complete. Launching workers. 
00:22:50.778 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31920, failed: 0 00:22:50.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2672, failed to submit 29248 00:22:50.778 success 509, unsuccess 2163, failed 0 00:22:50.778 17:10:06 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:50.778 17:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.778 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:22:50.778 17:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.778 17:10:06 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:50.778 17:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.778 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:22:52.148 17:10:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.148 17:10:07 -- target/abort_qd_sizes.sh@61 -- # killprocess 1786592 00:22:52.148 17:10:07 -- common/autotest_common.sh@936 -- # '[' -z 1786592 ']' 00:22:52.148 17:10:07 -- common/autotest_common.sh@940 -- # kill -0 1786592 00:22:52.148 17:10:07 -- common/autotest_common.sh@941 -- # uname 00:22:52.148 17:10:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.148 17:10:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1786592 00:22:52.148 17:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:52.148 17:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:52.148 17:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1786592' 00:22:52.148 killing process with pid 1786592 00:22:52.148 17:10:07 -- common/autotest_common.sh@955 -- # kill 1786592 00:22:52.148 17:10:07 -- common/autotest_common.sh@960 -- # wait 1786592 00:22:52.407 00:22:52.407 real 0m14.018s 00:22:52.407 user 0m53.460s 00:22:52.407 sys 0m2.523s 00:22:52.407 17:10:07 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:22:52.407 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:22:52.407 ************************************ 00:22:52.407 END TEST spdk_target_abort 00:22:52.407 ************************************ 00:22:52.407 17:10:07 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:52.407 17:10:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:52.407 17:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:52.407 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:22:52.407 ************************************ 00:22:52.407 START TEST kernel_target_abort 00:22:52.407 ************************************ 00:22:52.407 17:10:08 -- common/autotest_common.sh@1111 -- # kernel_target 00:22:52.407 17:10:08 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:52.407 17:10:08 -- nvmf/common.sh@717 -- # local ip 00:22:52.407 17:10:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:52.407 17:10:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:52.407 17:10:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.407 17:10:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.407 17:10:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:52.407 17:10:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.407 17:10:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:52.407 17:10:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:52.407 17:10:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:52.407 17:10:08 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:52.407 17:10:08 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:52.407 17:10:08 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:52.407 17:10:08 -- nvmf/common.sh@624 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:52.407 17:10:08 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:52.407 17:10:08 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:52.407 17:10:08 -- nvmf/common.sh@628 -- # local block nvme 00:22:52.407 17:10:08 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:22:52.407 17:10:08 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:52.407 17:10:08 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:52.407 17:10:08 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:53.783 Waiting for block devices as requested 00:22:53.783 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:53.783 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:53.783 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:53.783 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:53.783 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:53.783 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:54.041 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:54.041 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:54.041 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:54.041 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:54.300 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:54.300 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:54.300 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:54.300 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:54.559 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:54.559 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:54.559 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:54.818 17:10:10 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:54.818 17:10:10 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:54.818 17:10:10 -- nvmf/common.sh@641 -- # 
is_block_zoned nvme0n1 00:22:54.818 17:10:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:54.818 17:10:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:54.818 17:10:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:54.818 17:10:10 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:54.818 17:10:10 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:54.818 17:10:10 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:54.818 No valid GPT data, bailing 00:22:54.818 17:10:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:54.818 17:10:10 -- scripts/common.sh@391 -- # pt= 00:22:54.818 17:10:10 -- scripts/common.sh@392 -- # return 1 00:22:54.818 17:10:10 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:54.818 17:10:10 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:22:54.818 17:10:10 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:54.818 17:10:10 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:54.818 17:10:10 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:54.818 17:10:10 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:54.818 17:10:10 -- nvmf/common.sh@656 -- # echo 1 00:22:54.818 17:10:10 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:22:54.818 17:10:10 -- nvmf/common.sh@658 -- # echo 1 00:22:54.818 17:10:10 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:54.818 17:10:10 -- nvmf/common.sh@661 -- # echo tcp 00:22:54.818 17:10:10 -- nvmf/common.sh@662 -- # echo 4420 00:22:54.818 17:10:10 -- nvmf/common.sh@663 -- # echo ipv4 00:22:54.818 17:10:10 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:54.818 17:10:10 -- nvmf/common.sh@669 
-- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:54.818 00:22:54.818 Discovery Log Number of Records 2, Generation counter 2 00:22:54.818 =====Discovery Log Entry 0====== 00:22:54.818 trtype: tcp 00:22:54.818 adrfam: ipv4 00:22:54.818 subtype: current discovery subsystem 00:22:54.818 treq: not specified, sq flow control disable supported 00:22:54.818 portid: 1 00:22:54.818 trsvcid: 4420 00:22:54.818 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:54.818 traddr: 10.0.0.1 00:22:54.818 eflags: none 00:22:54.818 sectype: none 00:22:54.818 =====Discovery Log Entry 1====== 00:22:54.818 trtype: tcp 00:22:54.818 adrfam: ipv4 00:22:54.818 subtype: nvme subsystem 00:22:54.818 treq: not specified, sq flow control disable supported 00:22:54.818 portid: 1 00:22:54.818 trsvcid: 4420 00:22:54.818 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:54.818 traddr: 10.0.0.1 00:22:54.818 eflags: none 00:22:54.818 sectype: none 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:54.818 17:10:10 -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:54.818 17:10:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:54.818 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.100 Initializing NVMe Controllers 00:22:58.100 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:58.100 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:58.100 Initialization complete. Launching workers. 
00:22:58.100 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38977, failed: 0 00:22:58.100 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38977, failed to submit 0 00:22:58.100 success 0, unsuccess 38977, failed 0 00:22:58.100 17:10:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:58.100 17:10:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:58.100 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.382 Initializing NVMe Controllers 00:23:01.382 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:01.382 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:01.382 Initialization complete. Launching workers. 00:23:01.382 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72875, failed: 0 00:23:01.382 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18366, failed to submit 54509 00:23:01.382 success 0, unsuccess 18366, failed 0 00:23:01.382 17:10:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:01.382 17:10:16 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:01.382 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.911 Initializing NVMe Controllers 00:23:03.911 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:03.911 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:03.911 Initialization complete. Launching workers. 
00:23:03.911 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69585, failed: 0 00:23:03.911 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17378, failed to submit 52207 00:23:03.911 success 0, unsuccess 17378, failed 0 00:23:03.911 17:10:19 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:03.911 17:10:19 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:03.911 17:10:19 -- nvmf/common.sh@675 -- # echo 0 00:23:03.911 17:10:19 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.911 17:10:19 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:04.169 17:10:19 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:04.169 17:10:19 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:04.169 17:10:19 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:04.169 17:10:19 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:04.169 17:10:19 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:05.103 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:05.103 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:05.103 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:05.103 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:05.103 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:05.103 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:05.103 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:05.103 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:05.103 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:05.103 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:05.103 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:05.103 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 
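Editor's note: the kernel-target plumbing exercised above (the `configure_kernel_target` setup traced earlier and the `clean_kernel_target` teardown just traced) reduces to a short configfs sequence. The sketch below is illustration only, not part of the log: it assumes the standard kernel `nvmet` configfs attribute names and reuses this test's NQN, device, and address; it needs root and the real nvmet modules to run.

```bash
#!/usr/bin/env bash
# Sketch of the configfs steps behind configure_kernel_target (nvmf/common.sh).
# Values mirror the trace; attribute names are the standard kernel nvmet ones.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet nvmet-tcp
mkdir -p "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

echo 1            > "$subsys/attr_allow_any_host"       # matches the test's open-access setup
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # backing block device from the trace
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

ln -s "$subsys" "$nvmet/ports/1/subsystems/"            # start listening

# Teardown (clean_kernel_target equivalent), in reverse:
#   rm  $nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
#   rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
#   modprobe -r nvmet_tcp nvmet
```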
00:23:05.103 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:05.103 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:05.103 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:05.103 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:06.489 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:06.489 00:23:06.489 real 0m13.926s 00:23:06.489 user 0m5.609s 00:23:06.489 sys 0m3.169s 00:23:06.489 17:10:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:06.489 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:06.489 ************************************ 00:23:06.489 END TEST kernel_target_abort 00:23:06.489 ************************************ 00:23:06.489 17:10:21 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:06.489 17:10:21 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:06.489 17:10:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:06.489 17:10:21 -- nvmf/common.sh@117 -- # sync 00:23:06.489 17:10:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:06.489 17:10:21 -- nvmf/common.sh@120 -- # set +e 00:23:06.489 17:10:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:06.489 17:10:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:06.489 rmmod nvme_tcp 00:23:06.489 rmmod nvme_fabrics 00:23:06.489 rmmod nvme_keyring 00:23:06.489 17:10:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:06.489 17:10:21 -- nvmf/common.sh@124 -- # set -e 00:23:06.489 17:10:21 -- nvmf/common.sh@125 -- # return 0 00:23:06.489 17:10:21 -- nvmf/common.sh@478 -- # '[' -n 1786592 ']' 00:23:06.489 17:10:21 -- nvmf/common.sh@479 -- # killprocess 1786592 00:23:06.489 17:10:21 -- common/autotest_common.sh@936 -- # '[' -z 1786592 ']' 00:23:06.489 17:10:21 -- common/autotest_common.sh@940 -- # kill -0 1786592 00:23:06.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1786592) - No such process 00:23:06.489 17:10:21 -- common/autotest_common.sh@963 -- # echo 'Process with 
pid 1786592 is not found' 00:23:06.489 Process with pid 1786592 is not found 00:23:06.489 17:10:21 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:06.489 17:10:21 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:07.424 Waiting for block devices as requested 00:23:07.683 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:07.683 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:07.683 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:07.683 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:07.941 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:07.941 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:07.941 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:07.941 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:07.941 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:08.198 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:08.198 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:08.198 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:08.456 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:08.456 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:08.456 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:08.456 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:08.715 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:08.715 17:10:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:08.715 17:10:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:08.715 17:10:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.715 17:10:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.715 17:10:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.715 17:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:08.715 17:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.620 17:10:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:10.620 00:23:10.620 real 0m37.393s 00:23:10.620 
user 1m1.195s 00:23:10.620 sys 0m9.054s 00:23:10.620 17:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:10.620 17:10:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.620 ************************************ 00:23:10.621 END TEST nvmf_abort_qd_sizes 00:23:10.621 ************************************ 00:23:10.621 17:10:26 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:10.621 17:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:10.621 17:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:10.621 17:10:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.879 ************************************ 00:23:10.879 START TEST keyring_file 00:23:10.879 ************************************ 00:23:10.879 17:10:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:10.879 * Looking for test storage... 00:23:10.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:23:10.879 17:10:26 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:23:10.879 17:10:26 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.879 17:10:26 -- nvmf/common.sh@7 -- # uname -s 00:23:10.879 17:10:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.879 17:10:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.879 17:10:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.879 17:10:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.879 17:10:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.879 17:10:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.879 17:10:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.879 17:10:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.879 17:10:26 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.879 17:10:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.879 17:10:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.879 17:10:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.879 17:10:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.879 17:10:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.879 17:10:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.879 17:10:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.879 17:10:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.879 17:10:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.879 17:10:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.879 17:10:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.879 17:10:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.879 17:10:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.879 17:10:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.879 17:10:26 -- paths/export.sh@5 -- # export PATH 00:23:10.879 17:10:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.879 17:10:26 -- nvmf/common.sh@47 -- # : 0 00:23:10.879 17:10:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.879 17:10:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.879 17:10:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.879 17:10:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.879 17:10:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.879 17:10:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.879 17:10:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.879 17:10:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.879 17:10:26 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:10.879 17:10:26 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:10.879 17:10:26 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:10.879 17:10:26 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:10.879 17:10:26 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:10.879 17:10:26 -- 
keyring/file.sh@24 -- # trap cleanup EXIT 00:23:10.879 17:10:26 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:10.879 17:10:26 -- keyring/common.sh@15 -- # local name key digest path 00:23:10.879 17:10:26 -- keyring/common.sh@17 -- # name=key0 00:23:10.879 17:10:26 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:10.879 17:10:26 -- keyring/common.sh@17 -- # digest=0 00:23:10.879 17:10:26 -- keyring/common.sh@18 -- # mktemp 00:23:10.879 17:10:26 -- keyring/common.sh@18 -- # path=/tmp/tmp.pepaIdJQEy 00:23:10.879 17:10:26 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:10.879 17:10:26 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:10.879 17:10:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:10.879 17:10:26 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:10.879 17:10:26 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:10.879 17:10:26 -- nvmf/common.sh@693 -- # digest=0 00:23:10.879 17:10:26 -- nvmf/common.sh@694 -- # python - 00:23:10.879 17:10:26 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pepaIdJQEy 00:23:10.879 17:10:26 -- keyring/common.sh@23 -- # echo /tmp/tmp.pepaIdJQEy 00:23:10.879 17:10:26 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.pepaIdJQEy 00:23:10.879 17:10:26 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:10.879 17:10:26 -- keyring/common.sh@15 -- # local name key digest path 00:23:10.879 17:10:26 -- keyring/common.sh@17 -- # name=key1 00:23:10.879 17:10:26 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:10.879 17:10:26 -- keyring/common.sh@17 -- # digest=0 00:23:10.879 17:10:26 -- keyring/common.sh@18 -- # mktemp 00:23:10.879 17:10:26 -- keyring/common.sh@18 -- # path=/tmp/tmp.ShHtpUtqkY 00:23:10.879 17:10:26 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:10.879 
17:10:26 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:10.879 17:10:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:10.879 17:10:26 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:10.879 17:10:26 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:23:10.879 17:10:26 -- nvmf/common.sh@693 -- # digest=0 00:23:10.879 17:10:26 -- nvmf/common.sh@694 -- # python - 00:23:10.879 17:10:26 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ShHtpUtqkY 00:23:10.879 17:10:26 -- keyring/common.sh@23 -- # echo /tmp/tmp.ShHtpUtqkY 00:23:10.879 17:10:26 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ShHtpUtqkY 00:23:10.879 17:10:26 -- keyring/file.sh@30 -- # tgtpid=1792468 00:23:10.879 17:10:26 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:23:10.879 17:10:26 -- keyring/file.sh@32 -- # waitforlisten 1792468 00:23:10.879 17:10:26 -- common/autotest_common.sh@817 -- # '[' -z 1792468 ']' 00:23:10.879 17:10:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.879 17:10:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:10.879 17:10:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.879 17:10:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:10.879 17:10:26 -- common/autotest_common.sh@10 -- # set +x 00:23:11.138 [2024-04-18 17:10:26.621840] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
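Editor's note: the `format_interchange_psk` / `format_key` calls traced above pipe the key through an inline `python -`. A minimal sketch of what that computation appears to be, assuming the key string is used as its literal ASCII bytes and that digest 0 maps to hash indicator `00`; the output shape follows the NVMe/TCP TLS PSK interchange format, `NVMeTLSkey-1:<hh>:<base64(PSK bytes + CRC-32 of PSK, little-endian)>:`.

```shell
# Sketch (assumption) of nvmf/common.sh's format_interchange_psk helper.
format_interchange_psk() {
    # $1 = configured PSK string, $2 = digest selector (0 = no hash)
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode("ascii")                 # key as literal ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")       # CRC-32 appended little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
      base64.b64encode(key + crc).decode()))' "$1" "$2"
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
```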
00:23:11.138 [2024-04-18 17:10:26.621934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792468 ] 00:23:11.138 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.138 [2024-04-18 17:10:26.681147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.138 [2024-04-18 17:10:26.791053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.396 17:10:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:11.396 17:10:27 -- common/autotest_common.sh@850 -- # return 0 00:23:11.396 17:10:27 -- keyring/file.sh@33 -- # rpc_cmd 00:23:11.396 17:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.396 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:23:11.396 [2024-04-18 17:10:27.058136] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.396 null0 00:23:11.396 [2024-04-18 17:10:27.090188] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.396 [2024-04-18 17:10:27.090653] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:11.396 [2024-04-18 17:10:27.098204] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:11.396 17:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.654 17:10:27 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:11.654 17:10:27 -- common/autotest_common.sh@638 -- # local es=0 00:23:11.654 17:10:27 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:11.655 17:10:27 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:11.655 17:10:27 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.655 17:10:27 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:11.655 17:10:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.655 17:10:27 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:11.655 17:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.655 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:23:11.655 [2024-04-18 17:10:27.110224] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:23:11.655 { 00:23:11.655 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:11.655 "secure_channel": false, 00:23:11.655 "listen_address": { 00:23:11.655 "trtype": "tcp", 00:23:11.655 "traddr": "127.0.0.1", 00:23:11.655 "trsvcid": "4420" 00:23:11.655 }, 00:23:11.655 "method": "nvmf_subsystem_add_listener", 00:23:11.655 "req_id": 1 00:23:11.655 } 00:23:11.655 Got JSON-RPC error response 00:23:11.655 response: 00:23:11.655 { 00:23:11.655 "code": -32602, 00:23:11.655 "message": "Invalid parameters" 00:23:11.655 } 00:23:11.655 17:10:27 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:11.655 17:10:27 -- common/autotest_common.sh@641 -- # es=1 00:23:11.655 17:10:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:11.655 17:10:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:11.655 17:10:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:11.655 17:10:27 -- keyring/file.sh@46 -- # bperfpid=1792479 00:23:11.655 17:10:27 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:11.655 17:10:27 -- keyring/file.sh@48 -- # waitforlisten 1792479 /var/tmp/bperf.sock 00:23:11.655 17:10:27 -- common/autotest_common.sh@817 -- # '[' -z 1792479 ']' 00:23:11.655 17:10:27 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:11.655 17:10:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:11.655 17:10:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:11.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:11.655 17:10:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:11.655 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:23:11.655 [2024-04-18 17:10:27.157069] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:11.655 [2024-04-18 17:10:27.157130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792479 ] 00:23:11.655 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.655 [2024-04-18 17:10:27.216908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.655 [2024-04-18 17:10:27.331221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.913 17:10:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:11.913 17:10:27 -- common/autotest_common.sh@850 -- # return 0 00:23:11.913 17:10:27 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pepaIdJQEy 00:23:11.913 17:10:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pepaIdJQEy 00:23:12.172 17:10:27 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ShHtpUtqkY 00:23:12.172 17:10:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ShHtpUtqkY 00:23:12.430 17:10:27 -- keyring/file.sh@51 -- # get_key key0 
00:23:12.430 17:10:27 -- keyring/file.sh@51 -- # jq -r .path 00:23:12.430 17:10:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.430 17:10:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.430 17:10:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:12.688 17:10:28 -- keyring/file.sh@51 -- # [[ /tmp/tmp.pepaIdJQEy == \/\t\m\p\/\t\m\p\.\p\e\p\a\I\d\J\Q\E\y ]] 00:23:12.688 17:10:28 -- keyring/file.sh@52 -- # get_key key1 00:23:12.688 17:10:28 -- keyring/file.sh@52 -- # jq -r .path 00:23:12.688 17:10:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.688 17:10:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.688 17:10:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:12.945 17:10:28 -- keyring/file.sh@52 -- # [[ /tmp/tmp.ShHtpUtqkY == \/\t\m\p\/\t\m\p\.\S\h\H\t\p\U\t\q\k\Y ]] 00:23:12.945 17:10:28 -- keyring/file.sh@53 -- # get_refcnt key0 00:23:12.945 17:10:28 -- keyring/common.sh@12 -- # get_key key0 00:23:12.945 17:10:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:12.945 17:10:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.945 17:10:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.945 17:10:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:13.203 17:10:28 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:13.203 17:10:28 -- keyring/file.sh@54 -- # get_refcnt key1 00:23:13.203 17:10:28 -- keyring/common.sh@12 -- # get_key key1 00:23:13.203 17:10:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:13.203 17:10:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.203 17:10:28 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.203 17:10:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:13.460 17:10:28 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:13.460 17:10:28 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:13.460 17:10:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:13.461 [2024-04-18 17:10:29.137621] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.718 nvme0n1 00:23:13.718 17:10:29 -- keyring/file.sh@59 -- # get_refcnt key0 00:23:13.718 17:10:29 -- keyring/common.sh@12 -- # get_key key0 00:23:13.718 17:10:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:13.718 17:10:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.718 17:10:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.718 17:10:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:13.976 17:10:29 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:13.976 17:10:29 -- keyring/file.sh@60 -- # get_refcnt key1 00:23:13.976 17:10:29 -- keyring/common.sh@12 -- # get_key key1 00:23:13.976 17:10:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:13.976 17:10:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.976 17:10:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.976 17:10:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
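The `prep_key` calls earlier in this log (keyring/common.sh@20 → nvmf/common.sh@704 `format_key`, which pipes into `python -`) build the TLS PSK interchange string that `keyring_file_add_key` and `bdev_nvme_attach_controller --psk` consume. A minimal sketch of that encoding, assuming it mirrors SPDK's `format_key` helper (the function name and the little-endian CRC32 layout here are reconstructed for illustration, not quoted from this log):

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int) -> str:
    """Encode a hex PSK as NVMeTLSkey-1:<2-hex-digit digest>:<base64>:.

    The base64 payload is the raw key bytes with a little-endian CRC32 of
    the key appended, matching the interchange format the log's helper
    writes to /tmp/tmp.* before chmod 0600.
    """
    key = bytes.fromhex(key_hex)
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

# Same key0 material as keyring/file.sh@26 above; digest 0 means no hash.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The resulting string is what ends up in the key file path (`key0path`) that the RPCs in this log reference by name.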
00:23:14.234 17:10:29 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:14.234 17:10:29 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:14.234 Running I/O for 1 seconds... 00:23:15.168 00:23:15.168 Latency(us) 00:23:15.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.168 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:15.168 nvme0n1 : 1.01 7417.91 28.98 0.00 0.00 17149.57 3713.71 22233.69 00:23:15.168 =================================================================================================================== 00:23:15.168 Total : 7417.91 28.98 0.00 0.00 17149.57 3713.71 22233.69 00:23:15.168 0 00:23:15.168 17:10:30 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:15.168 17:10:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:15.426 17:10:31 -- keyring/file.sh@65 -- # get_refcnt key0 00:23:15.426 17:10:31 -- keyring/common.sh@12 -- # get_key key0 00:23:15.426 17:10:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:15.426 17:10:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:15.426 17:10:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.426 17:10:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:15.683 17:10:31 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:15.683 17:10:31 -- keyring/file.sh@66 -- # get_refcnt key1 00:23:15.683 17:10:31 -- keyring/common.sh@12 -- # get_key key1 00:23:15.683 17:10:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:15.683 17:10:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:15.683 17:10:31 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.683 17:10:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:15.942 17:10:31 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:15.942 17:10:31 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:15.942 17:10:31 -- common/autotest_common.sh@638 -- # local es=0 00:23:15.942 17:10:31 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:15.942 17:10:31 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:15.942 17:10:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:15.942 17:10:31 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:15.942 17:10:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:15.942 17:10:31 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:15.942 17:10:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:16.201 [2024-04-18 17:10:31.831901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01af0 (107): Transport endpoint is not connected 00:23:16.201 [2024-04-18 17:10:31.831914] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 
00:23:16.201 [2024-04-18 17:10:31.832890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01af0 (9): Bad file descriptor 00:23:16.201 [2024-04-18 17:10:31.833888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.201 [2024-04-18 17:10:31.833911] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:16.201 [2024-04-18 17:10:31.833935] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.201 request: 00:23:16.201 { 00:23:16.201 "name": "nvme0", 00:23:16.201 "trtype": "tcp", 00:23:16.201 "traddr": "127.0.0.1", 00:23:16.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:16.201 "adrfam": "ipv4", 00:23:16.201 "trsvcid": "4420", 00:23:16.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.201 "psk": "key1", 00:23:16.201 "method": "bdev_nvme_attach_controller", 00:23:16.201 "req_id": 1 00:23:16.201 } 00:23:16.201 Got JSON-RPC error response 00:23:16.201 response: 00:23:16.201 { 00:23:16.201 "code": -32602, 00:23:16.201 "message": "Invalid parameters" 00:23:16.201 } 00:23:16.201 17:10:31 -- common/autotest_common.sh@641 -- # es=1 00:23:16.201 17:10:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:16.201 17:10:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:16.201 17:10:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:16.201 17:10:31 -- keyring/file.sh@71 -- # get_refcnt key0 00:23:16.201 17:10:31 -- keyring/common.sh@12 -- # get_key key0 00:23:16.201 17:10:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:16.201 17:10:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:16.201 17:10:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.201 17:10:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:16.460 17:10:32 -- keyring/file.sh@71 
-- # (( 1 == 1 )) 00:23:16.460 17:10:32 -- keyring/file.sh@72 -- # get_refcnt key1 00:23:16.460 17:10:32 -- keyring/common.sh@12 -- # get_key key1 00:23:16.460 17:10:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:16.460 17:10:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:16.460 17:10:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.460 17:10:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:16.718 17:10:32 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:16.718 17:10:32 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:16.718 17:10:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:16.976 17:10:32 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:16.976 17:10:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:17.233 17:10:32 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:17.233 17:10:32 -- keyring/file.sh@77 -- # jq length 00:23:17.233 17:10:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:17.491 17:10:33 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:17.491 17:10:33 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.pepaIdJQEy 00:23:17.491 17:10:33 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.pepaIdJQEy 00:23:17.491 17:10:33 -- common/autotest_common.sh@638 -- # local es=0 00:23:17.491 17:10:33 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.pepaIdJQEy 00:23:17.491 17:10:33 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:17.491 17:10:33 -- common/autotest_common.sh@630 -- # case "$(type 
-t "$arg")" in 00:23:17.491 17:10:33 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:17.491 17:10:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:17.491 17:10:33 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pepaIdJQEy 00:23:17.491 17:10:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pepaIdJQEy 00:23:17.749 [2024-04-18 17:10:33.302755] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pepaIdJQEy': 0100660 00:23:17.749 [2024-04-18 17:10:33.302794] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:17.749 request: 00:23:17.749 { 00:23:17.749 "name": "key0", 00:23:17.749 "path": "/tmp/tmp.pepaIdJQEy", 00:23:17.749 "method": "keyring_file_add_key", 00:23:17.749 "req_id": 1 00:23:17.749 } 00:23:17.749 Got JSON-RPC error response 00:23:17.749 response: 00:23:17.749 { 00:23:17.749 "code": -1, 00:23:17.749 "message": "Operation not permitted" 00:23:17.749 } 00:23:17.749 17:10:33 -- common/autotest_common.sh@641 -- # es=1 00:23:17.749 17:10:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:17.749 17:10:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:17.749 17:10:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:17.749 17:10:33 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.pepaIdJQEy 00:23:17.749 17:10:33 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pepaIdJQEy 00:23:17.749 17:10:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pepaIdJQEy 00:23:18.008 17:10:33 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.pepaIdJQEy 00:23:18.008 17:10:33 -- keyring/file.sh@88 -- # get_refcnt key0 00:23:18.008 17:10:33 -- keyring/common.sh@12 -- # get_key key0 
00:23:18.008 17:10:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:18.008 17:10:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:18.008 17:10:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:18.008 17:10:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:18.266 17:10:33 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:18.266 17:10:33 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:18.266 17:10:33 -- common/autotest_common.sh@638 -- # local es=0 00:23:18.266 17:10:33 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:18.266 17:10:33 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:18.266 17:10:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:18.266 17:10:33 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:18.266 17:10:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:18.266 17:10:33 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:18.266 17:10:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:18.524 [2024-04-18 17:10:34.032767] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.pepaIdJQEy': No such file or directory 00:23:18.524 [2024-04-18 17:10:34.032802] 
nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:18.524 [2024-04-18 17:10:34.032839] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:18.524 [2024-04-18 17:10:34.032853] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:18.524 [2024-04-18 17:10:34.032866] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:18.524 request: 00:23:18.524 { 00:23:18.524 "name": "nvme0", 00:23:18.524 "trtype": "tcp", 00:23:18.524 "traddr": "127.0.0.1", 00:23:18.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:18.524 "adrfam": "ipv4", 00:23:18.524 "trsvcid": "4420", 00:23:18.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.524 "psk": "key0", 00:23:18.524 "method": "bdev_nvme_attach_controller", 00:23:18.524 "req_id": 1 00:23:18.524 } 00:23:18.524 Got JSON-RPC error response 00:23:18.524 response: 00:23:18.524 { 00:23:18.524 "code": -19, 00:23:18.524 "message": "No such device" 00:23:18.524 } 00:23:18.524 17:10:34 -- common/autotest_common.sh@641 -- # es=1 00:23:18.524 17:10:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:18.524 17:10:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:18.524 17:10:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:18.524 17:10:34 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:18.524 17:10:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:18.782 17:10:34 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:18.782 17:10:34 -- keyring/common.sh@15 -- # local name key digest path 00:23:18.782 17:10:34 -- keyring/common.sh@17 -- # name=key0 00:23:18.782 17:10:34 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:18.782 17:10:34 -- 
keyring/common.sh@17 -- # digest=0 00:23:18.782 17:10:34 -- keyring/common.sh@18 -- # mktemp 00:23:18.782 17:10:34 -- keyring/common.sh@18 -- # path=/tmp/tmp.1W3wVOSMUO 00:23:18.782 17:10:34 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:18.782 17:10:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:18.782 17:10:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:18.782 17:10:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:18.782 17:10:34 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:18.782 17:10:34 -- nvmf/common.sh@693 -- # digest=0 00:23:18.782 17:10:34 -- nvmf/common.sh@694 -- # python - 00:23:18.782 17:10:34 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1W3wVOSMUO 00:23:18.782 17:10:34 -- keyring/common.sh@23 -- # echo /tmp/tmp.1W3wVOSMUO 00:23:18.782 17:10:34 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.1W3wVOSMUO 00:23:18.782 17:10:34 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1W3wVOSMUO 00:23:18.782 17:10:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1W3wVOSMUO 00:23:19.040 17:10:34 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:19.040 17:10:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:19.298 nvme0n1 00:23:19.298 17:10:34 -- keyring/file.sh@99 -- # get_refcnt key0 00:23:19.298 17:10:34 -- keyring/common.sh@12 -- # get_key key0 00:23:19.298 17:10:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:19.298 17:10:34 -- keyring/common.sh@10 
-- # bperf_cmd keyring_get_keys 00:23:19.298 17:10:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:19.298 17:10:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:19.556 17:10:35 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:19.556 17:10:35 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:19.556 17:10:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:19.813 17:10:35 -- keyring/file.sh@101 -- # get_key key0 00:23:19.813 17:10:35 -- keyring/file.sh@101 -- # jq -r .removed 00:23:19.813 17:10:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:19.813 17:10:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:19.813 17:10:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:20.071 17:10:35 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:20.071 17:10:35 -- keyring/file.sh@102 -- # get_refcnt key0 00:23:20.071 17:10:35 -- keyring/common.sh@12 -- # get_key key0 00:23:20.071 17:10:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:20.071 17:10:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:20.071 17:10:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.071 17:10:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:20.329 17:10:35 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:20.329 17:10:35 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:20.329 17:10:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:20.587 17:10:36 -- keyring/file.sh@104 
-- # bperf_cmd keyring_get_keys 00:23:20.587 17:10:36 -- keyring/file.sh@104 -- # jq length 00:23:20.587 17:10:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.844 17:10:36 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:20.844 17:10:36 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1W3wVOSMUO 00:23:20.844 17:10:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1W3wVOSMUO 00:23:21.102 17:10:36 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ShHtpUtqkY 00:23:21.102 17:10:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ShHtpUtqkY 00:23:21.360 17:10:36 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:21.360 17:10:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:21.618 nvme0n1 00:23:21.618 17:10:37 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:21.618 17:10:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:21.877 17:10:37 -- keyring/file.sh@112 -- # config='{ 00:23:21.877 "subsystems": [ 00:23:21.877 { 00:23:21.877 "subsystem": "keyring", 00:23:21.877 "config": [ 00:23:21.877 { 00:23:21.877 "method": "keyring_file_add_key", 00:23:21.877 "params": { 00:23:21.877 "name": "key0", 00:23:21.877 "path": "/tmp/tmp.1W3wVOSMUO" 00:23:21.877 } 00:23:21.877 }, 00:23:21.877 { 
00:23:21.877 "method": "keyring_file_add_key", 00:23:21.877 "params": { 00:23:21.877 "name": "key1", 00:23:21.877 "path": "/tmp/tmp.ShHtpUtqkY" 00:23:21.877 } 00:23:21.877 } 00:23:21.877 ] 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "subsystem": "iobuf", 00:23:21.877 "config": [ 00:23:21.877 { 00:23:21.877 "method": "iobuf_set_options", 00:23:21.877 "params": { 00:23:21.877 "small_pool_count": 8192, 00:23:21.877 "large_pool_count": 1024, 00:23:21.877 "small_bufsize": 8192, 00:23:21.877 "large_bufsize": 135168 00:23:21.877 } 00:23:21.877 } 00:23:21.877 ] 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "subsystem": "sock", 00:23:21.877 "config": [ 00:23:21.877 { 00:23:21.877 "method": "sock_impl_set_options", 00:23:21.877 "params": { 00:23:21.877 "impl_name": "posix", 00:23:21.877 "recv_buf_size": 2097152, 00:23:21.877 "send_buf_size": 2097152, 00:23:21.877 "enable_recv_pipe": true, 00:23:21.877 "enable_quickack": false, 00:23:21.877 "enable_placement_id": 0, 00:23:21.877 "enable_zerocopy_send_server": true, 00:23:21.877 "enable_zerocopy_send_client": false, 00:23:21.877 "zerocopy_threshold": 0, 00:23:21.877 "tls_version": 0, 00:23:21.877 "enable_ktls": false 00:23:21.877 } 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "method": "sock_impl_set_options", 00:23:21.877 "params": { 00:23:21.877 "impl_name": "ssl", 00:23:21.877 "recv_buf_size": 4096, 00:23:21.877 "send_buf_size": 4096, 00:23:21.877 "enable_recv_pipe": true, 00:23:21.877 "enable_quickack": false, 00:23:21.877 "enable_placement_id": 0, 00:23:21.877 "enable_zerocopy_send_server": true, 00:23:21.877 "enable_zerocopy_send_client": false, 00:23:21.877 "zerocopy_threshold": 0, 00:23:21.877 "tls_version": 0, 00:23:21.877 "enable_ktls": false 00:23:21.877 } 00:23:21.877 } 00:23:21.877 ] 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "subsystem": "vmd", 00:23:21.877 "config": [] 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "subsystem": "accel", 00:23:21.877 "config": [ 00:23:21.877 { 00:23:21.877 "method": 
"accel_set_options", 00:23:21.877 "params": { 00:23:21.877 "small_cache_size": 128, 00:23:21.877 "large_cache_size": 16, 00:23:21.877 "task_count": 2048, 00:23:21.877 "sequence_count": 2048, 00:23:21.877 "buf_count": 2048 00:23:21.877 } 00:23:21.877 } 00:23:21.877 ] 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "subsystem": "bdev", 00:23:21.877 "config": [ 00:23:21.877 { 00:23:21.877 "method": "bdev_set_options", 00:23:21.877 "params": { 00:23:21.877 "bdev_io_pool_size": 65535, 00:23:21.877 "bdev_io_cache_size": 256, 00:23:21.877 "bdev_auto_examine": true, 00:23:21.877 "iobuf_small_cache_size": 128, 00:23:21.877 "iobuf_large_cache_size": 16 00:23:21.877 } 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "method": "bdev_raid_set_options", 00:23:21.877 "params": { 00:23:21.877 "process_window_size_kb": 1024 00:23:21.877 } 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "method": "bdev_iscsi_set_options", 00:23:21.877 "params": { 00:23:21.877 "timeout_sec": 30 00:23:21.877 } 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "method": "bdev_nvme_set_options", 00:23:21.877 "params": { 00:23:21.877 "action_on_timeout": "none", 00:23:21.877 "timeout_us": 0, 00:23:21.877 "timeout_admin_us": 0, 00:23:21.877 "keep_alive_timeout_ms": 10000, 00:23:21.877 "arbitration_burst": 0, 00:23:21.877 "low_priority_weight": 0, 00:23:21.877 "medium_priority_weight": 0, 00:23:21.877 "high_priority_weight": 0, 00:23:21.877 "nvme_adminq_poll_period_us": 10000, 00:23:21.878 "nvme_ioq_poll_period_us": 0, 00:23:21.878 "io_queue_requests": 512, 00:23:21.878 "delay_cmd_submit": true, 00:23:21.878 "transport_retry_count": 4, 00:23:21.878 "bdev_retry_count": 3, 00:23:21.878 "transport_ack_timeout": 0, 00:23:21.878 "ctrlr_loss_timeout_sec": 0, 00:23:21.878 "reconnect_delay_sec": 0, 00:23:21.878 "fast_io_fail_timeout_sec": 0, 00:23:21.878 "disable_auto_failback": false, 00:23:21.878 "generate_uuids": false, 00:23:21.878 "transport_tos": 0, 00:23:21.878 "nvme_error_stat": false, 00:23:21.878 "rdma_srq_size": 0, 
00:23:21.878 "io_path_stat": false, 00:23:21.878 "allow_accel_sequence": false, 00:23:21.878 "rdma_max_cq_size": 0, 00:23:21.878 "rdma_cm_event_timeout_ms": 0, 00:23:21.878 "dhchap_digests": [ 00:23:21.878 "sha256", 00:23:21.878 "sha384", 00:23:21.878 "sha512" 00:23:21.878 ], 00:23:21.878 "dhchap_dhgroups": [ 00:23:21.878 "null", 00:23:21.878 "ffdhe2048", 00:23:21.878 "ffdhe3072", 00:23:21.878 "ffdhe4096", 00:23:21.878 "ffdhe6144", 00:23:21.878 "ffdhe8192" 00:23:21.878 ] 00:23:21.878 } 00:23:21.878 }, 00:23:21.878 { 00:23:21.878 "method": "bdev_nvme_attach_controller", 00:23:21.878 "params": { 00:23:21.878 "name": "nvme0", 00:23:21.878 "trtype": "TCP", 00:23:21.878 "adrfam": "IPv4", 00:23:21.878 "traddr": "127.0.0.1", 00:23:21.878 "trsvcid": "4420", 00:23:21.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.878 "prchk_reftag": false, 00:23:21.878 "prchk_guard": false, 00:23:21.878 "ctrlr_loss_timeout_sec": 0, 00:23:21.878 "reconnect_delay_sec": 0, 00:23:21.878 "fast_io_fail_timeout_sec": 0, 00:23:21.878 "psk": "key0", 00:23:21.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:21.878 "hdgst": false, 00:23:21.878 "ddgst": false 00:23:21.878 } 00:23:21.878 }, 00:23:21.878 { 00:23:21.878 "method": "bdev_nvme_set_hotplug", 00:23:21.878 "params": { 00:23:21.878 "period_us": 100000, 00:23:21.878 "enable": false 00:23:21.878 } 00:23:21.878 }, 00:23:21.878 { 00:23:21.878 "method": "bdev_wait_for_examine" 00:23:21.878 } 00:23:21.878 ] 00:23:21.878 }, 00:23:21.878 { 00:23:21.878 "subsystem": "nbd", 00:23:21.878 "config": [] 00:23:21.878 } 00:23:21.878 ] 00:23:21.878 }' 00:23:21.878 17:10:37 -- keyring/file.sh@114 -- # killprocess 1792479 00:23:21.878 17:10:37 -- common/autotest_common.sh@936 -- # '[' -z 1792479 ']' 00:23:21.878 17:10:37 -- common/autotest_common.sh@940 -- # kill -0 1792479 00:23:21.878 17:10:37 -- common/autotest_common.sh@941 -- # uname 00:23:21.878 17:10:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:21.878 17:10:37 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1792479 00:23:21.878 17:10:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:21.878 17:10:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:21.878 17:10:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1792479' 00:23:21.878 killing process with pid 1792479 00:23:21.878 17:10:37 -- common/autotest_common.sh@955 -- # kill 1792479 00:23:21.878 Received shutdown signal, test time was about 1.000000 seconds 00:23:21.878 00:23:21.878 Latency(us) 00:23:21.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.878 =================================================================================================================== 00:23:21.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.878 17:10:37 -- common/autotest_common.sh@960 -- # wait 1792479 00:23:22.137 17:10:37 -- keyring/file.sh@117 -- # bperfpid=1793921 00:23:22.137 17:10:37 -- keyring/file.sh@119 -- # waitforlisten 1793921 /var/tmp/bperf.sock 00:23:22.137 17:10:37 -- common/autotest_common.sh@817 -- # '[' -z 1793921 ']' 00:23:22.137 17:10:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:22.137 17:10:37 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:22.137 17:10:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:22.137 17:10:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:22.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:22.137 17:10:37 -- keyring/file.sh@115 -- # echo '{ 00:23:22.137 "subsystems": [ 00:23:22.137 { 00:23:22.137 "subsystem": "keyring", 00:23:22.137 "config": [ 00:23:22.137 { 00:23:22.137 "method": "keyring_file_add_key", 00:23:22.137 "params": { 00:23:22.137 "name": "key0", 00:23:22.137 "path": "/tmp/tmp.1W3wVOSMUO" 00:23:22.137 } 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "method": "keyring_file_add_key", 00:23:22.137 "params": { 00:23:22.137 "name": "key1", 00:23:22.137 "path": "/tmp/tmp.ShHtpUtqkY" 00:23:22.137 } 00:23:22.137 } 00:23:22.137 ] 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "subsystem": "iobuf", 00:23:22.137 "config": [ 00:23:22.137 { 00:23:22.137 "method": "iobuf_set_options", 00:23:22.137 "params": { 00:23:22.137 "small_pool_count": 8192, 00:23:22.137 "large_pool_count": 1024, 00:23:22.137 "small_bufsize": 8192, 00:23:22.137 "large_bufsize": 135168 00:23:22.137 } 00:23:22.137 } 00:23:22.137 ] 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "subsystem": "sock", 00:23:22.137 "config": [ 00:23:22.137 { 00:23:22.137 "method": "sock_impl_set_options", 00:23:22.137 "params": { 00:23:22.137 "impl_name": "posix", 00:23:22.137 "recv_buf_size": 2097152, 00:23:22.137 "send_buf_size": 2097152, 00:23:22.137 "enable_recv_pipe": true, 00:23:22.137 "enable_quickack": false, 00:23:22.137 "enable_placement_id": 0, 00:23:22.137 "enable_zerocopy_send_server": true, 00:23:22.137 "enable_zerocopy_send_client": false, 00:23:22.137 "zerocopy_threshold": 0, 00:23:22.137 "tls_version": 0, 00:23:22.137 "enable_ktls": false 00:23:22.137 } 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "method": "sock_impl_set_options", 00:23:22.137 "params": { 00:23:22.137 "impl_name": "ssl", 00:23:22.137 "recv_buf_size": 4096, 00:23:22.137 "send_buf_size": 4096, 00:23:22.137 "enable_recv_pipe": true, 00:23:22.137 "enable_quickack": false, 00:23:22.137 "enable_placement_id": 0, 00:23:22.137 "enable_zerocopy_send_server": true, 00:23:22.137 "enable_zerocopy_send_client": false, 00:23:22.137 
"zerocopy_threshold": 0, 00:23:22.137 "tls_version": 0, 00:23:22.137 "enable_ktls": false 00:23:22.137 } 00:23:22.137 } 00:23:22.137 ] 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "subsystem": "vmd", 00:23:22.137 "config": [] 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "subsystem": "accel", 00:23:22.137 "config": [ 00:23:22.137 { 00:23:22.137 "method": "accel_set_options", 00:23:22.138 "params": { 00:23:22.138 "small_cache_size": 128, 00:23:22.138 "large_cache_size": 16, 00:23:22.138 "task_count": 2048, 00:23:22.138 "sequence_count": 2048, 00:23:22.138 "buf_count": 2048 00:23:22.138 } 00:23:22.138 } 00:23:22.138 ] 00:23:22.138 }, 00:23:22.138 { 00:23:22.138 "subsystem": "bdev", 00:23:22.138 "config": [ 00:23:22.138 { 00:23:22.138 "method": "bdev_set_options", 00:23:22.138 "params": { 00:23:22.138 "bdev_io_pool_size": 65535, 00:23:22.138 "bdev_io_cache_size": 256, 00:23:22.138 "bdev_auto_examine": true, 00:23:22.138 "iobuf_small_cache_size": 128, 00:23:22.138 "iobuf_large_cache_size": 16 00:23:22.138 } 00:23:22.138 }, 00:23:22.138 { 00:23:22.138 "method": "bdev_raid_set_options", 00:23:22.138 "params": { 00:23:22.138 "process_window_size_kb": 1024 00:23:22.138 } 00:23:22.138 }, 00:23:22.138 { 00:23:22.138 "method": "bdev_iscsi_set_options", 00:23:22.138 "params": { 00:23:22.138 "timeout_sec": 30 00:23:22.138 } 00:23:22.138 }, 00:23:22.138 { 00:23:22.138 "method": "bdev_nvme_set_options", 00:23:22.138 "params": { 00:23:22.138 "action_on_timeout": "none", 00:23:22.138 "timeout_us": 0, 00:23:22.138 "timeout_admin_us": 0, 00:23:22.138 "keep_alive_timeout_ms": 10000, 00:23:22.138 "arbitration_burst": 0, 00:23:22.138 "low_priority_weight": 0, 00:23:22.138 "medium_priority_weight": 0, 00:23:22.138 "high_priority_weight": 0, 00:23:22.138 "nvme_adminq_poll_period_us": 10000, 00:23:22.138 "nvme_ioq_poll_period_us": 0, 00:23:22.138 "io_queue_requests": 512, 00:23:22.138 "delay_cmd_submit": true, 00:23:22.138 "transport_retry_count": 4, 00:23:22.138 "bdev_retry_count": 3, 
00:23:22.138 "transport_ack_timeout": 0, 00:23:22.138 "ctrlr_loss_timeout_sec": 0, 00:23:22.138 "reconnect_delay_sec": 0, 00:23:22.138 "fast_io_fail_timeout_sec": 0, 00:23:22.138 "disable_auto_failback": false, 00:23:22.138 "generate_uuids": false, 00:23:22.138 "transport_tos": 0, 00:23:22.138 "nvme_error_stat": false, 00:23:22.138 "rdma_srq_size": 0, 00:23:22.138 "io_path_stat": false, 00:23:22.138 "allow_accel_sequence": false, 00:23:22.138 "rdma_max_cq_size": 0, 00:23:22.138 "rdma_cm_event_timeout_ms": 0, 00:23:22.138 "dhchap_digests": [ 00:23:22.138 "sha256", 00:23:22.138 "sha384", 00:23:22.138 "sha512" 00:23:22.138 ], 00:23:22.138 "dhchap_dhgroups": [ 00:23:22.138 "null", 00:23:22.138 "ffdhe2048", 00:23:22.138 "ffdhe3072", 00:23:22.138 "ffdhe4096", 00:23:22.138 "ffdhe6144", 00:23:22.138 "ffdhe8192" 00:23:22.138 ] 00:23:22.138 } 00:23:22.138 }, 00:23:22.138 { 00:23:22.138 "method": "bdev_nvme_attach_controller", 00:23:22.138 "params": { 00:23:22.138 "name": "nvme0", 00:23:22.138 "trtype": "TCP", 00:23:22.138 "adrfam": "IPv4", 00:23:22.138 "traddr": "127.0.0.1", 00:23:22.138 "trsvcid": "4420", 00:23:22.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:22.138 "prchk_reftag": false, 00:23:22.138 "prchk_guard": false, 00:23:22.138 "ctrlr_loss_timeout_sec": 0, 00:23:22.138 "reconnect_delay_sec": 0, 00:23:22.138 "fast_io_fail_timeout_sec": 0, 00:23:22.138 "psk": "key0", 00:23:22.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:22.138 "hdgst": false, 00:23:22.138 "ddgst": false 00:23:22.138 } 00:23:22.138 }, 00:23:22.138 { 00:23:22.138 "method": "bdev_nvme_set_hotplug", 00:23:22.138 "params": { 00:23:22.138 "period_us": 100000, 00:23:22.138 "enable": false 00:23:22.138 } 00:23:22.138 }, 00:23:22.138 { 00:23:22.138 "method": "bdev_wait_for_examine" 00:23:22.138 } 00:23:22.138 ] 00:23:22.138 }, 00:23:22.138 { 00:23:22.138 "subsystem": "nbd", 00:23:22.138 "config": [] 00:23:22.138 } 00:23:22.138 ] 00:23:22.138 }' 00:23:22.138 17:10:37 -- common/autotest_common.sh@826 -- 
# xtrace_disable 00:23:22.138 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:22.138 [2024-04-18 17:10:37.794747] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:22.138 [2024-04-18 17:10:37.794846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793921 ] 00:23:22.138 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.397 [2024-04-18 17:10:37.853290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.397 [2024-04-18 17:10:37.958824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.688 [2024-04-18 17:10:38.145508] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.253 17:10:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:23.253 17:10:38 -- common/autotest_common.sh@850 -- # return 0 00:23:23.253 17:10:38 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:23.253 17:10:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:23.253 17:10:38 -- keyring/file.sh@120 -- # jq length 00:23:23.512 17:10:38 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:23.512 17:10:38 -- keyring/file.sh@121 -- # get_refcnt key0 00:23:23.512 17:10:38 -- keyring/common.sh@12 -- # get_key key0 00:23:23.512 17:10:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:23.512 17:10:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:23.512 17:10:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:23.512 17:10:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:23.512 17:10:39 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:23.512 17:10:39 -- 
keyring/file.sh@122 -- # get_refcnt key1 00:23:23.770 17:10:39 -- keyring/common.sh@12 -- # get_key key1 00:23:23.770 17:10:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:23.770 17:10:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:23.770 17:10:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:23.770 17:10:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:23.770 17:10:39 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:23.770 17:10:39 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:23.770 17:10:39 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:23.770 17:10:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:24.029 17:10:39 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:24.029 17:10:39 -- keyring/file.sh@1 -- # cleanup 00:23:24.029 17:10:39 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1W3wVOSMUO /tmp/tmp.ShHtpUtqkY 00:23:24.029 17:10:39 -- keyring/file.sh@20 -- # killprocess 1793921 00:23:24.029 17:10:39 -- common/autotest_common.sh@936 -- # '[' -z 1793921 ']' 00:23:24.029 17:10:39 -- common/autotest_common.sh@940 -- # kill -0 1793921 00:23:24.029 17:10:39 -- common/autotest_common.sh@941 -- # uname 00:23:24.029 17:10:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:24.029 17:10:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1793921 00:23:24.287 17:10:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:24.288 17:10:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:24.288 17:10:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1793921' 00:23:24.288 killing process with pid 1793921 00:23:24.288 17:10:39 -- common/autotest_common.sh@955 -- # kill 1793921 00:23:24.288 Received shutdown signal, test time was 
about 1.000000 seconds 00:23:24.288 00:23:24.288 Latency(us) 00:23:24.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.288 =================================================================================================================== 00:23:24.288 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:24.288 17:10:39 -- common/autotest_common.sh@960 -- # wait 1793921 00:23:24.545 17:10:40 -- keyring/file.sh@21 -- # killprocess 1792468 00:23:24.545 17:10:40 -- common/autotest_common.sh@936 -- # '[' -z 1792468 ']' 00:23:24.545 17:10:40 -- common/autotest_common.sh@940 -- # kill -0 1792468 00:23:24.545 17:10:40 -- common/autotest_common.sh@941 -- # uname 00:23:24.545 17:10:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:24.545 17:10:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1792468 00:23:24.545 17:10:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:24.545 17:10:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:24.545 17:10:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1792468' 00:23:24.545 killing process with pid 1792468 00:23:24.545 17:10:40 -- common/autotest_common.sh@955 -- # kill 1792468 00:23:24.545 [2024-04-18 17:10:40.038569] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:24.545 17:10:40 -- common/autotest_common.sh@960 -- # wait 1792468 00:23:25.111 00:23:25.111 real 0m14.095s 00:23:25.111 user 0m34.639s 00:23:25.111 sys 0m3.394s 00:23:25.111 17:10:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:25.111 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:23:25.111 ************************************ 00:23:25.111 END TEST keyring_file 00:23:25.111 ************************************ 00:23:25.111 17:10:40 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:23:25.111 17:10:40 -- 
spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:23:25.111 17:10:40 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:23:25.111 17:10:40 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:23:25.111 17:10:40 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:23:25.111 17:10:40 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:23:25.111 17:10:40 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:23:25.111 17:10:40 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:23:25.111 17:10:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:25.111 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:23:25.111 17:10:40 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:23:25.111 17:10:40 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:23:25.111 17:10:40 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:23:25.111 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:23:27.013 INFO: APP EXITING 00:23:27.013 INFO: killing all VMs 00:23:27.013 INFO: killing vhost app 00:23:27.013 WARN: no vhost pid file found 00:23:27.013 INFO: EXIT DONE 00:23:27.975 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:23:27.975 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:23:27.975 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:23:27.975 0000:00:04.5 (8086 0e25): Already using 
the ioatdma driver 00:23:27.975 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:23:27.975 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:23:27.975 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:23:27.975 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:23:27.975 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:23:27.975 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:23:27.975 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:23:27.975 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:23:27.975 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:23:27.975 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:23:27.975 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:23:27.975 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:23:27.975 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:23:29.351 Cleaning 00:23:29.351 Removing: /var/run/dpdk/spdk0/config 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:23:29.351 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:29.351 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:29.351 Removing: /var/run/dpdk/spdk1/config 00:23:29.351 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:29.351 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:29.351 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:29.351 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:29.351 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:23:29.351 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:23:29.351 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:23:29.351 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:23:29.351 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:29.351 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:29.351 Removing: /var/run/dpdk/spdk1/mp_socket 00:23:29.351 Removing: /var/run/dpdk/spdk2/config 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:23:29.351 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:29.351 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:29.351 Removing: /var/run/dpdk/spdk3/config 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:23:29.351 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:29.351 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:29.351 Removing: /var/run/dpdk/spdk4/config 00:23:29.351 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:29.351 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:29.351 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:29.351 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:29.351 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:23:29.351 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:23:29.351 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:23:29.351 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:23:29.351 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:29.351 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:29.351 Removing: /dev/shm/bdev_svc_trace.1 00:23:29.351 Removing: /dev/shm/nvmf_trace.0 00:23:29.351 Removing: /dev/shm/spdk_tgt_trace.pid1565652 00:23:29.351 Removing: /var/run/dpdk/spdk0 00:23:29.351 Removing: /var/run/dpdk/spdk1 00:23:29.351 Removing: /var/run/dpdk/spdk2 00:23:29.351 Removing: /var/run/dpdk/spdk3 00:23:29.351 Removing: /var/run/dpdk/spdk4 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1563881 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1564754 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1565652 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1566200 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1566897 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1567039 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1567771 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1567909 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1568168 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1569475 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1571028 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1571225 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1571425 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1571762 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1572090 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1572267 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1572427 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1572726 00:23:29.351 Removing: 
/var/run/dpdk/spdk_pid1573211 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1575567 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1575870 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1576172 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1576292 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1576620 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1576753 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1577066 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1577203 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1577499 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1577510 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1577684 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1577818 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1578195 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1578364 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1578684 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1578873 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1578902 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1579108 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1579395 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1579561 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1579726 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1580009 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1580173 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1580451 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1580623 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1580824 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1581065 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1581235 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1581509 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1581685 00:23:29.351 Removing: /var/run/dpdk/spdk_pid1581901 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1582127 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1582295 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1582573 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1582749 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1583028 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1583197 
00:23:29.352 Removing: /var/run/dpdk/spdk_pid1583366 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1583560 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1583915 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1586113 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1612602 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1615106 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1620992 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1624423 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1626742 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1627189 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1634466 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1634468 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1635126 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1635776 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1636326 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1636725 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1636843 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1636990 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1637115 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1637128 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1637876 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1638437 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1639603 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1639999 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1640005 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1640266 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1641163 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1641887 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1647276 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1647551 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1650201 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1653918 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1655977 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1662389 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1667721 00:23:29.352 Removing: /var/run/dpdk/spdk_pid1668917 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1669604 00:23:29.610 Removing: 
/var/run/dpdk/spdk_pid1680553 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1682783 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1685573 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1686757 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1688008 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1688096 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1688235 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1688375 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1688815 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1690147 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1691006 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1691432 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1693056 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1693485 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1694054 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1696581 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1702367 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1705624 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1709399 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1710479 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1711580 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1714136 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1716508 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1720874 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1720876 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1723782 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1723917 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1724059 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1724321 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1724332 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1726960 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1727289 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1729838 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1731817 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1735370 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1738510 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1742910 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1742913 
00:23:29.610 Removing: /var/run/dpdk/spdk_pid1755462 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1755995 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1756406 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1756937 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1757529 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1757939 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1758348 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1758781 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1761262 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1761520 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1765328 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1765513 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1767130 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1772179 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1772185 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1775160 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1776518 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1778522 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1779404 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1780815 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1781699 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1786994 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1787382 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1787774 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1789283 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1789619 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1790015 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1792468 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1792479 00:23:29.610 Removing: /var/run/dpdk/spdk_pid1793921 00:23:29.610 Clean 00:23:29.610 17:10:45 -- common/autotest_common.sh@1437 -- # return 0 00:23:29.610 17:10:45 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:23:29.610 17:10:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:29.610 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:23:29.868 17:10:45 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:23:29.868 17:10:45 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:23:29.868 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:23:29.868 17:10:45 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:23:29.868 17:10:45 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:23:29.868 17:10:45 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:23:29.868 17:10:45 -- spdk/autotest.sh@389 -- # hash lcov 00:23:29.868 17:10:45 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:29.868 17:10:45 -- spdk/autotest.sh@391 -- # hostname 00:23:29.868 17:10:45 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:23:29.868 geninfo: WARNING: invalid characters removed from testname! 
00:24:01.950 17:11:12 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:01.950 17:11:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:03.851 17:11:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:07.131 17:11:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:09.715 17:11:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:12.241 17:11:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:15.522 17:11:30 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:15.522 17:11:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.522 17:11:30 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:15.522 17:11:30 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.522 17:11:30 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.522 17:11:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.522 17:11:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.522 17:11:30 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.522 17:11:30 -- paths/export.sh@5 -- $ export PATH 00:24:15.522 17:11:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.522 17:11:30 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:24:15.522 17:11:30 -- common/autobuild_common.sh@435 -- $ date +%s 00:24:15.522 17:11:30 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713453090.XXXXXX 00:24:15.523 17:11:30 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713453090.iLBlqP 00:24:15.523 17:11:30 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:24:15.523 17:11:30 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:24:15.523 17:11:30 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:24:15.523 17:11:30 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:24:15.523 17:11:30 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:24:15.523 17:11:30 -- common/autobuild_common.sh@451 -- $ get_config_params 00:24:15.523 17:11:30 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:24:15.523 17:11:30 -- common/autotest_common.sh@10 -- $ set +x 00:24:15.523 17:11:30 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:24:15.523 17:11:30 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:24:15.523 17:11:30 -- pm/common@17 -- $ local monitor 00:24:15.523 17:11:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:15.523 17:11:30 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1802484 00:24:15.523 17:11:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:15.523 17:11:30 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1802486 00:24:15.523 17:11:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:15.523 17:11:30 -- pm/common@21 -- $ date +%s 00:24:15.523 17:11:30 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1802488 00:24:15.523 17:11:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:15.523 17:11:30 -- pm/common@21 -- $ date +%s 00:24:15.523 17:11:30 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1802491 00:24:15.523 17:11:30 -- pm/common@21 -- $ date +%s 00:24:15.523 17:11:30 -- pm/common@26 -- $ sleep 1 00:24:15.523 17:11:30 -- pm/common@21 -- $ date +%s 00:24:15.523 17:11:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713453090 00:24:15.523 17:11:30 -- pm/common@21 -- $ sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713453090 00:24:15.523 17:11:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713453090 00:24:15.523 17:11:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713453090 00:24:15.523 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713453090_collect-vmstat.pm.log 00:24:15.523 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713453090_collect-bmc-pm.bmc.pm.log 00:24:15.523 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713453090_collect-cpu-load.pm.log 00:24:15.523 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713453090_collect-cpu-temp.pm.log 00:24:16.456 17:11:31 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:24:16.456 17:11:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:24:16.456 17:11:31 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:16.456 17:11:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:16.456 17:11:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:16.456 17:11:31 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:16.456 17:11:31 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:16.456 17:11:31 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:16.456 17:11:31 -- 
common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:24:16.456 17:11:31 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:16.456 17:11:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:16.456 17:11:31 -- pm/common@30 -- $ signal_monitor_resources TERM 00:24:16.456 17:11:31 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:24:16.456 17:11:31 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:16.456 17:11:31 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:24:16.456 17:11:31 -- pm/common@45 -- $ pid=1802504 00:24:16.456 17:11:31 -- pm/common@52 -- $ sudo kill -TERM 1802504 00:24:16.456 17:11:31 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:16.456 17:11:31 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:24:16.456 17:11:31 -- pm/common@45 -- $ pid=1802503 00:24:16.456 17:11:31 -- pm/common@52 -- $ sudo kill -TERM 1802503 00:24:16.456 17:11:31 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:16.456 17:11:31 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:24:16.456 17:11:31 -- pm/common@45 -- $ pid=1802502 00:24:16.456 17:11:31 -- pm/common@52 -- $ sudo kill -TERM 1802502 00:24:16.456 17:11:31 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:16.456 17:11:31 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:24:16.456 17:11:31 -- pm/common@45 -- $ pid=1802501 00:24:16.456 17:11:31 -- pm/common@52 -- $ sudo kill -TERM 1802501 00:24:16.456 + [[ -n 1481126 ]] 00:24:16.456 + sudo kill 1481126 00:24:16.464 [Pipeline] } 00:24:16.479 [Pipeline] // stage 
00:24:16.483 [Pipeline] } 00:24:16.497 [Pipeline] // timeout 00:24:16.501 [Pipeline] } 00:24:16.515 [Pipeline] // catchError 00:24:16.519 [Pipeline] } 00:24:16.533 [Pipeline] // wrap 00:24:16.538 [Pipeline] } 00:24:16.548 [Pipeline] // catchError 00:24:16.554 [Pipeline] stage 00:24:16.555 [Pipeline] { (Epilogue) 00:24:16.564 [Pipeline] catchError 00:24:16.565 [Pipeline] { 00:24:16.574 [Pipeline] echo 00:24:16.575 Cleanup processes 00:24:16.579 [Pipeline] sh 00:24:16.854 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:16.854 1802628 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:24:16.854 1802766 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:16.867 [Pipeline] sh 00:24:17.141 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:17.141 ++ grep -v 'sudo pgrep' 00:24:17.141 ++ awk '{print $1}' 00:24:17.141 + sudo kill -9 1802628 00:24:17.152 [Pipeline] sh 00:24:17.432 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:25.544 [Pipeline] sh 00:24:25.821 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:25.821 Artifacts sizes are good 00:24:25.835 [Pipeline] archiveArtifacts 00:24:25.841 Archiving artifacts 00:24:26.006 [Pipeline] sh 00:24:26.280 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:24:26.294 [Pipeline] cleanWs 00:24:26.302 [WS-CLEANUP] Deleting project workspace... 00:24:26.302 [WS-CLEANUP] Deferred wipeout is used... 00:24:26.307 [WS-CLEANUP] done 00:24:26.308 [Pipeline] } 00:24:26.327 [Pipeline] // catchError 00:24:26.338 [Pipeline] sh 00:24:26.616 + logger -p user.info -t JENKINS-CI 00:24:26.624 [Pipeline] } 00:24:26.640 [Pipeline] // stage 00:24:26.645 [Pipeline] } 00:24:26.662 [Pipeline] // node 00:24:26.669 [Pipeline] End of Pipeline 00:24:26.706 Finished: SUCCESS